Nonwoven for iOS (2013)
Nonwoven: (adj.) A fabric-like material made of interlocked fibers, held together by chemical, mechanical or thermal means.
Nonwoven is a mobile painting application for iOS devices. When the user touches the screen, invisible “pins” are placed on the drawing area. Patterns are then formed by connecting these pins with transparent threads within a user-defined range, creating weblike forms. The pins can also be generated from a given photograph, as well as being placed by the classical “painting” gesture.
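The core mechanic can be sketched in a few lines of Python. This is a loose sketch of my own (the function and parameter names are assumptions, not the actual iOS or Processing source): every existing pin within the threading range of a newly placed pin gets a thread.

```python
import math

def connect_pins(pins, new_pin, length_limit):
    """Connect a newly placed pin to every existing pin within the
    threading range. A sketch of my own, not the app's real source."""
    threads = []
    for pin in pins:
        # A thread forms whenever an existing pin is within range.
        if math.hypot(pin[0] - new_pin[0], pin[1] - new_pin[1]) <= length_limit:
            threads.append((pin, new_pin))
    return threads

# A grid of pins; a touch at the center threads out to the nearby ones.
pins = [(x, y) for x in range(0, 101, 10) for y in range(0, 101, 10)]
threads = connect_pins(pins, (50, 50), length_limit=20)
```

Because pin creation and thread drawing are independent, the same routine works whether the pins came from finger strokes or from a photograph.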
Color, transparency, threading range, blending mode, radial symmetry, thickness and many other options can be manipulated to create a wide range of effects for generative painting, finger painting, calligraphic effects or photo manipulation. The generation algorithm was prototyped in Processing and ported to iOS. In other words, the system is portable and will be available on other platforms, hopefully with modes of interaction other than a mouse and a keyboard.
I hope you’ll find this documentation useful and like the Nonwoven application.
If you have any suggestions or comments, be sure to send me an email or contact me at Nonwoven’s Facebook Page, where I’ll share news and users’ works. I also always have more promo codes than I need; I’ll be sharing them in small batches as well.
The application is temporarily unavailable on the App Store.
The user interaction is divided into two areas, the Styling Menu and the Context Menu. In the iPad interface, these two menus are placed side by side; in the iPhone interface they are split into two tabs. The options in the context menu change depending on the current mode of interaction. Since there are only two different contents for the context menu, they will be referred to as “the Drawing Context” and “the Generation Context”.
The styling menu controls the visual appearance of the threads.
Opacity slider controls the opacity of a single thread. The following image shows three strokes with the opacity values 10%, 50% and 100% respectively.
Thickness slider controls the thickness of a single thread. The following image shows three strokes (at 10% opacity) with 1, 5 and 10 point thickness values respectively.
Brush Size slider controls the overall thickness of the stroke (not the individual threads). The following three strokes are all at 10% opacity. The first has 3 points thickness and 5 points brush size, the second has 1 point thickness and 20 points brush size, and the third has 1 point brush size and thickness.
Length Limit slider controls the maximum length a thread can have. The following strokes have 10% opacity, 1 point thickness and 5 point brush size. Their length limits are 5, 20, 50 respectively.
The Blending Mode controls how the colors of newly drawn threads interact with the existing image.
In normal blending mode, the thread is drawn over the existing image with the selected color and opacity.
In multiply blending mode, white is transparent and the color acts like ink; dark colors get darker when drawn over and over. The following two rectangles are drawn in normal and multiply modes respectively with 10% opacity, 1 point thickness, 5 points brush size, 20 points length limit and RGB value 235, 0, 0 (slightly dark red).
In screen blending mode, black is transparent and the color acts like light; light colors get brighter when drawn over and over. The following two rectangles are drawn in normal and screen modes respectively with 10% opacity, 1 point thickness, 5 points brush size, 20 points length limit and RGB value 255, 40, 40 (slightly bright red).
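These behaviors follow from the conventional multiply and screen compositing formulas. Below is a rough Python sketch under the assumption that Nonwoven uses the standard definitions (the app's exact math is not published, so treat this as an illustration, not the implementation):

```python
def blend(dst, src, opacity, mode):
    """Composite one thread color over the canvas, per channel in 0..1.
    Sketch of standard multiply/screen compositing; an assumption,
    not Nonwoven's actual code."""
    if mode == "multiply":   # white (1.0) leaves the canvas unchanged
        b = dst * src
    elif mode == "screen":   # black (0.0) leaves the canvas unchanged
        b = 1 - (1 - dst) * (1 - src)
    else:                    # normal
        b = src
    # Mix the blended color in at the thread's opacity.
    return dst * (1 - opacity) + b * opacity

# Repeated low-opacity multiply strokes darken the canvas step by step.
c = 1.0
for _ in range(10):
    c = blend(c, 235 / 255, 0.10, "multiply")
```

The asymmetry between the two modes explains the masking trick above: screen over pure white produces white again, so screened strokes only show up where something darker already exists.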
Using different blending modes over each other can be used for effects such as masking. The following image shows a dark blue patch blended in multiply mode, with pink and green strokes blended in screen mode. As screen blending makes anything invisible over a white canvas, green and pink strokes are only visible within the area of the blue patch.
The following example shows a black stroke blended in normal mode, over a large black patch in screen mode. Even though the black patch becomes invisible in screen mode, it still provides pins for the new strokes to attach. Therefore, the second stroke has a “hairy” look, extending out to invisible points of attraction.
The Color Picker Menu allows the user to change the canvas and the thread color. You can either drag your finger on the large color picker area to change the hue and the brightness, or use the sliders to select the color you like.
Sliders can work both in HSB (hue, saturation, brightness) and RGB (red, green, blue) modes so that you can choose the one that feels easier. When the scheme is changed, the sliders are automatically adjusted to match the currently selected color. Although RGB is more conventional, HSB scheme is much more intuitive than RGB once you get used to it.
The following capture of the iOS simulator shows how the color picker area works.
The background color changes only if the canvas is empty. If there is an existing drawing, then the new background color will be applied on the next clear (see drawing context/actions).
While in drawing mode, the context menu displays drawing-specific options. The drawing context has the following subcategories: Symmetry, Actions, Canvas Shape (iPad only), Input/Output, and Preview (iPad only).
The Radial Symmetry option enables the user to draw copies of the stroke simultaneously around the center of the canvas. The number of copies is represented by the symmetry guides. The following two images show a single stroke drawn with 3 and 7 radial copies respectively. (10% opacity, 1p thickness, 10p brush size, 20p length, dark green multiply blending)
Mirror Horizontally and Mirror Vertically options turn the corresponding reflective symmetry modes on and off. The following images display horizontal and vertical symmetries. (10% opacity, 1p thickness, 5p brush size, 20p length, dark green multiply blending)
The following image shows the result when both reflective symmetries are on.
Reflective and radial symmetry are mutually exclusive (i.e. turning one on turns the other off).
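Both symmetry modes boil down to simple point transforms around the canvas center. Here is a hedged Python sketch (function names and exact rules are my own assumptions):

```python
import math

def radial_copies(point, center, n):
    """Rotate a stroke point around the canvas center to produce
    n radial copies. A sketch, not the app's actual code."""
    cx, cy = center
    dx, dy = point[0] - cx, point[1] - cy
    copies = []
    for k in range(n):
        a = 2 * math.pi * k / n
        copies.append((cx + dx * math.cos(a) - dy * math.sin(a),
                       cy + dx * math.sin(a) + dy * math.cos(a)))
    return copies

def mirrored(point, center, horizontal, vertical):
    """Reflective symmetry: reflect across the vertical and/or
    horizontal axis through the center."""
    pts = [point]
    if horizontal:
        pts += [(2 * center[0] - x, y) for x, y in pts]
    if vertical:
        pts += [(x, 2 * center[1] - y) for x, y in pts]
    return pts
```

With both reflections on, each touch yields four points; with n-fold radial symmetry it yields n, which matches the guide counts described above.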
The Undo button undoes the last action, including the undo itself (in other words, it is an undo/redo button).
Clear button deletes the current drawing and creates a new, empty canvas.
The Disconnect button deletes the existing pins but keeps the drawing. This way, new threads will not interact with the existing drawing, acting like a new layer on top of it. The following two images show a red stroke drawn over a black one. In the second image, the Disconnect button was pressed before the red stroke. (10% opacity, 1p thickness, 5p brush, 60p length) Notice how in the second image the strokes only bind onto themselves, since they are disconnected.
Canvas Shape (iPad only)
Unlike the iPhone screen, the iPad screen is large enough to support a square canvas without making it look ridiculous and sad. Also, symmetrical drawings usually look better on a square, whereas a rectangle canvas offers a larger painting area. In the case of auto-generation (see below), some photographs work better with a rectangle canvas and others with a square. Therefore, in the iPad version, the user gets to decide on the shape of the canvas. Also, I like squares.
The Square and Rectangle buttons under the Canvas Shape subcategory reset the canvas to the selected shape. When the application uses a rectangle canvas (the default in the iPhone version), the menu disappears when the drawing area is touched. While working with a square canvas, however, the menu does not fade away: its size is exactly what is left over from a square canvas, so it does not occlude anything while visible.
Buttons under the I/O subcategory let the user load, save and share images.
Save & Share Button triggers iOS’ native sharing menu, as displayed below.
Services are available if the device is logged into the desired social network. For example, in order to share on Facebook, the phone or the pad should be logged into Facebook through the Settings page. The Facebook app is not necessary.
The Load button lets the user import images into the application. An image can be used in two different ways: for auto-generation (extracting pins from the image to automatically generate the threads) or as a background. The image can be picked either from the camera or the gallery. Upon tapping the Load button, the four resulting combinations can be chosen from the menu below:
By loading an image as background, the user is able to draw over an existing image, such as the strokes drawn over the out-of-focus marina lights below. (3% opacity, 1p thickness, 3p brush, 45p length, 5 radial symmetry, screen blending, light orange)
Loading an image for auto-generation is explained in its own section below.
Preview (iPad only)
The styling options may create a wide range of effects and it’s not always easy to guess the output. Therefore, the preview screen generates a pre-defined golden spiral with the styling options: opacity, thickness, length limit, blending and colors. The following images show different renderings of the preview screen.
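For the curious, a golden spiral path like the preview's can be generated as a logarithmic spiral whose radius grows by the golden ratio every quarter turn. This is a sketch of my own under that assumption; I am not claiming it matches the app's actual preview code:

```python
import math

def golden_spiral_pins(center, n, scale=1.0):
    """Sample n points along a logarithmic spiral with the golden
    ratio's growth rate. A hypothetical sketch of a preview path."""
    phi = (1 + math.sqrt(5)) / 2
    b = math.log(phi) / (math.pi / 2)   # radius grows by phi per quarter turn
    pins = []
    for i in range(n):
        t = i * 0.2                      # angle step along the spiral
        r = scale * math.exp(b * t)
        pins.append((center[0] + r * math.cos(t),
                     center[1] + r * math.sin(t)))
    return pins

pins = golden_spiral_pins((0.0, 0.0), 50)
```

Feeding such a fixed set of pins through the threading routine is enough to render a styled preview, since the styling options only affect how threads are drawn, not where the pins are.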
As explained above, the painting process is executed by drawing threads between the pins on the canvas. So, technically speaking, the creation of the pins and the drawing of the image are completely independent events; as long as there are pins, Nonwoven can draw threads between them, just like the preview pane with its predefined spiral.
When an image is loaded for the auto-generation process, the pins are extracted based on two features of the image: brightness and edges. The user can tweak how the pins are generated through the Generation Context Menu, explained below.
Before that, let me show you this loosely auto generated image of one of my cats. His name is Meriç. Meriç’s hobbies include sleeping on the sofa, meowing during my dinner for inducing guilt to get some of my food and modeling for experimental mobile drawing applications.
When an image is loaded for auto-generation, the application displays the pins generated with the initial settings of the Generation Context. Note that these settings do not display numerical values like the styling options do: they control values such as the brightness of a pixel, so any representation other than a 0-to-1 mapping would be my own abstraction, and the slider itself is a visual representation of the intermediate values between zero and one.
The settings in the generation context define which pixels have a chance to become a pin. The following examples use this image as a base:
The Edge Threshold slider controls the threshold for edge detection. Lower values generate more pronounced edges. The highest threshold means disregarding edge information entirely. The following four screenshots show four different edge values with the brightness information excluded.
The Edge Separation slider controls the distance between the points in the detected edges. This comes in handy especially with images such as this one, which has too many detected edges in areas like the hair. The following two images show the same set of generated pins at a very low edge threshold level. The first has the lowest separation value, whereas the second has a considerably higher separation. Like the examples above, the brightness information is excluded.
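The edge threshold and edge separation interplay can be sketched as follows. This is my own illustrative Python, using a simple gradient-magnitude detector and a minimum-spacing rule; the app's actual detector is not published:

```python
def edge_pins(gray, edge_threshold, separation):
    """Sketch of edge-based pin extraction: a pixel becomes a pin when
    its gradient magnitude exceeds the threshold, and candidates closer
    than `separation` to an accepted pin are skipped. `gray` is a 2D
    list of brightness values in 0..1. An assumption, not app code."""
    pins = []
    h, w = len(gray), len(gray[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]
            gy = gray[y + 1][x] - gray[y - 1][x]
            if (gx * gx + gy * gy) ** 0.5 < edge_threshold:
                continue  # edge too weak for the current threshold
            if all((px - x) ** 2 + (py - y) ** 2 >= separation ** 2
                   for px, py in pins):
                pins.append((x, y))
    return pins

# A tiny image: dark left half, bright right half -> an edge in the middle.
img = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
pins = edge_pins(img, edge_threshold=0.5, separation=2)
```

Lowering the threshold admits weaker edges (more pins), while raising the separation thins dense regions such as the hair in the example.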
The Darkness Threshold slider defines the minimum level of black that can be a pin. This way, it is possible to decide what level of darkness will be included in the final image. The following images show the pins generated by using all levels of black (slider at zero) and excluding dark grays (slider near the middle) respectively. Edge values are excluded for the sake of this example.
Notice how the pins in the eye pupils disappear in the second image as they are excluded.
The Lightness Threshold slider defines the maximum level of white that can be a pin. This way, it is possible to decide what level of brightness will be used for creating pins. The following images show the pins generated by using all levels of white except light gray (slider near the beginning) and by using only dark grays (slider near the end).
The photograph used here has a light gray background, so it has to be excluded to get a reasonable output.
The Luminosity Separation slider controls the distance between the points in the generated patches of dark and bright areas, just like the edge separation for edge detection. The following two images show the same set of generated pins. The first one has the lowest separation value, whereas the second one has a considerably higher separation.
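The brightness pass, combining the darkness threshold, lightness threshold and luminosity separation, can be sketched like this. Again, the function name and exact rules are my own assumptions for illustration:

```python
def luminosity_pins(gray, darkness, lightness, separation):
    """Sketch of brightness-based pin extraction: a pixel qualifies
    when its value lies between the darkness and lightness thresholds,
    and qualifying pixels are thinned so no two pins are closer than
    `separation`. An assumption, not the app's actual code."""
    pins = []
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            if not (darkness <= v <= lightness):
                continue  # too dark or too bright to become a pin
            if all((px - x) ** 2 + (py - y) ** 2 >= separation ** 2
                   for px, py in pins):
                pins.append((x, y))
    return pins

# A horizontal brightness ramp: only the mid-gray band yields pins.
img = [[x / 7 for x in range(8)] for _ in range(4)]
pins = luminosity_pins(img, darkness=0.2, lightness=0.6, separation=3)
```

In this framing, raising the darkness slider drops the very darkest pixels (the eye pupils in the example) and lowering the lightness slider drops the bright background.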
When all these options are used together, the resulting pin output includes both edge and brightness information. The following image shows the settings I decided make a nice input for generation: luminosity separation is fairly high, all blacks are used, light grays are excluded, and the edges are fairly pronounced and close to each other.
Remember that the styling menu is accessible at any point, so it is possible to review the styling options before generation. For this image, I prefer 7% opacity, 1p thickness, 20p length, a light warm gray background and dark red threads with multiply blending.
The threading phase is no different from painting with a finger, as explained above. So the produced image interacts with new strokes the same way hand-drawn shapes interact with each other. Please enjoy my portrait below for new night terrors. As you can see, the new strokes in my eyes and around my lips thread out to the generated image and paint as if the existing portrait were drawn by the regular painting process.
Sometimes the generation process can take a very long time, since millions of threads may be drawn during generation. To avoid waiting for minutes in darkness, or having to force quit the application to cancel the generation, a progress view with the option to interrupt appears over the interface during generation.
Interrupting the process yields an interesting result. When the threading process is interrupted, the canvas reveals the half-baked image. But since the pins are already there and only the threads are missing, drawing over this half-baked image still interacts with the existing pins. Notice how the randomly painted patch reaches out to invisible points, getting denser around the missing eye and the hair.
This interruption effect was actually an overlooked design issue that I noticed during development and deliberately left in, as it might be useful for the sake of the design. Removing it is as simple as touching the Clear button.