Exploring Disabled Artists’ Experiences of Using Eye Gaze Tracking “In The Wild”

One of the main observations from the initial technology evaluation sessions we conducted with disabled artists was that technology should focus not just on supporting creative work, but also on all of the other tasks that surround artistic practice.

This can include responding to email correspondence, managing budgets, conducting research online, updating a website or blog, and numerous other important tasks. These types of activities can be particularly time-consuming for some disabled artists as they are typically completed on a computer.

This can cause significant barriers for people with physical impairments who may be unable to use a “standard” mouse or keyboard to control their computer. In such situations, artists have to find clunky workarounds that provide at least some access to the software and applications they require (or rely on someone else to assist them).

This additional time spent doing administrative work takes time away from creative pursuits and can become incredibly frustrating and tedious. We therefore need more tools that better support all elements of a disabled artist’s practice to ensure that they can maximise the time spent producing creative work. We wanted to explore this further in the last stage of work on the D2ART project and this post provides an overview of a recent field study we conducted.

In this study we wanted to give artists some technology to evaluate in their own working environment over an extended period of time. One of the main motivations for this is that it can be difficult to get a genuine sense of how people find using new technologies in shorter laboratory-based studies. We felt that conducting these tests “in the wild” would provide deeper insight into the potential of assistive tools for influencing artistic practice.

We gave artists a Windows laptop, a Tobii EyeX sensor, and the OptiKey software (pre-installed onto the laptop). OptiKey is open-source software that provides the ability to communicate with others via eye gaze and text-to-speech technology – it also provides features for navigating the Windows interface and other software (see this post for an overview of OptiKey).


We initially met with artists to provide them with the equipment and to show them how to set everything up. In that first session, the artists were also given an overview of how OptiKey works and asked to complete a few simple tasks to enable us to collect some baseline data (e.g. emailing someone, writing some text, and browsing to a specific website).

Artists were asked to use the tools for at least a week and were given a series of daily tasks to complete. This included activities such as sending emails, browsing for something on the web, and using other “media” websites such as YouTube to explore how accessible they were to use.

We also simulated the process of the artist being commissioned for DASH’s Art Express project. Artists therefore received an automated email every couple of days (which appeared to be from DASH) and were asked to respond to any requests made – these included editing and updating a budget spreadsheet, completing an online profile form, and digitally signing a contract.

All tasks were designed to take no longer than 15 minutes to complete – artists were asked to spend this amount of time on each task and not to feel obliged to carry on if they had not completed it in that time-frame (although they were of course free to carry on). They were also asked to keep a journal to document interesting observations and highlight any issues they may have experienced.

After artists had been briefed we left the equipment with them to carry on with the tasks in their own time. We then returned to pick up the equipment (between one and three weeks later) and asked them to complete the initial three tasks from the first session again. We also interviewed all artists to explore further how they found using the technology and how they felt it could influence their artistic practice.

This study has highlighted some very interesting points around use of the technology and its broader impact on artistic practice – we’re now bringing all the work together and plan to publish it in the near future.

As always, please get in touch if you’d like any further details around the project.

Disabled Artist Experiences in Using Eye/Head Tracking and Mid-Air Gesturing for Creative Work

In this post we wanted to share an overview of some of the initial findings from the technology evaluation sessions we recently conducted with artists. In particular, we wanted to share some of the observations and feedback we’ve received from artists in relation to eye/head tracking and mid-air gesturing for artistic work.

We’ve described the evaluation session structure in a previous post, but as a quick recap we explored the use of eye gaze (via the Tobii EyeX sensor), head tracking (through a standard webcam and EnableViacam), and mid-air gesturing (using a Leap Motion controller) for graphical and creative work (in addition to other hardware and software).

One of the primary aims of the sessions was to examine how artists found using this technology out-of-the-box with existing platforms and software. We adapted the sessions depending on each artist’s requirements and experience, but in general most artists completed a series of tasks via eye gaze, followed by head tracking, and finally with mid-air gesturing.

The eye and head tracking sessions initially involved artists having a play with a simple interface that allowed them to perform basic shape transformations via large icons. This was intended as an easy introductory task to start familiarising artists with using their eyes to control systems. They then attempted a series of straightforward tasks within Photoshop such as moving a square from one side of the screen to another, selecting tools from the toolbar, creating a square with specific dimensions, and performing multiple transformations on a shape (e.g. resizing, rotating, and re-positioning).

For the mid-air gesturing session, artists were asked to use a 3D sculpting application and explore how to use the menu system and the range of tools available for sculpting via mid-air hand gestures. Once all tasks were completed we spoke with artists to gain their thoughts and perceptions around the technology and how they felt these types of tools might potentially impact on practice.

There were several themes that emerged through the evaluation sessions (although we’re still conducting a deeper analysis of the video footage and data collected):

Eye Tracking

The first step in using eye gaze tracking was to calibrate the artist’s eyes through a task that involved following a dot around the screen. The system was able to calibrate the eyes of all artists (including those wearing contact lenses and glasses) except for one – this particular artist wore glasses, so we attempted a calibration on multiple occasions both with and without glasses on, but the system was unable to accurately track the artist’s eyes.

Artist Beth Griffiths testing the eye tracking calibration

In terms of the initial introductory tasks (prior to using Photoshop), artists were generally able to select the buttons available by looking at them and then pressing a switch (i.e. a large button placed on the table in front of them). For a few of the artists, the calibration seemed a little off towards the top-right of the screen, which is where the main navigation controls were located. When this happened we generally tried to calibrate again to improve the accuracy, with mixed results.

Screenshot of the initial interface artists used

For the Photoshop tasks, artists had significant difficulties in completing the exercises we had set. These types of tasks typically take a few seconds or less with a mouse and keyboard, but it wasn’t uncommon for some of the tasks to be completed in 30 seconds or more (or artists were unable to complete them at all). In particular, they experienced significant difficulties in accurately placing shapes in exactly the position they wanted (due to the sensitivity of the control and constant movement of eyes).

Artists also felt they had little control when drawing new shapes and attempting to perform transformations (e.g. rotation and resizing). This was largely due to the small handles used in Photoshop for manipulating shapes which are particularly problematic to select via your eyes. It was clear, therefore, that the design of the Photoshop interface was not well suited for eye-based interactions.

Head Tracking

The calibration process was simpler for head tracking in that artists simply had to be within range of the webcam and the software would then detect their face in a few seconds. In general, artists found controlling the cursor via their head (using the EnableViacam software) much smoother. They felt that they had more control in selecting icons and creating/positioning shapes.

However, artists still struggled with some Photoshop tasks such as selecting the small handles on objects in order to perform shape transformations. Also, whilst many of the artists felt they had more control, these tasks still took significantly longer to complete than would typically be expected when using a mouse and keyboard.

It’s important to highlight that some artists struggled more with head tracking than with eye tracking – especially artists who had more severe forms of motor impairment and less control over their head movements. Moreover, several artists stated that they started to feel a little neck strain and on occasion had to stretch their body uncomfortably to reach items placed around the borders of the screen.

Artist Tanya Raabe working on the Photoshop tasks using head tracking

Mid-Air Gesturing

This technology is clearly more applicable and usable for some artists than others. Artists with motor impairments, for instance, unsurprisingly had difficulty in holding their hand above the sensor in the required location for a prolonged period of time. These artists also had difficulties in performing the different types of gestures, which require a certain level of dexterity.

The hand structure of some artists meant the sensor had issues recognising fingers and hands correctly, which tended to result in a frustrating user experience. However, artists with more dexterity were able to hold their hand above the sensor and use the applications we were evaluating.

In terms of the Sculpting application, artists were able to navigate the different menu systems and select different tools. They were also able to manipulate the digital materials to start sculpting, using their fingers and hands to control the interface. It was clear from the session that this method of interacting with systems was new to all artists and that a significant learning curve would be involved in becoming proficient in working this way.

Several artists commented on the potential of the technology and liked the fluid and dynamic ways in which the sensor enabled them to work (without having to apply pressure to a canvas).

Artist Pearl Findlay evaluating the Leap Motion sensor

Summary / Next Steps

It was clear from the sessions we ran – especially around eye and head tracking – that there are significant issues in “bolting” these technologies onto existing software. Most software has primarily been designed and optimised for mouse/keyboard interactions (and increasingly our fingers) – however, using your eye gaze or head movements to control a system is a completely different type of interaction. We therefore need new interfaces that better support people using these types of technologies for creating artistic work.

Whilst artists did find the tasks we set difficult to complete, they also highlighted how they could see potential in the technologies evaluated. Several artists mentioned the combination of multiple technologies as an exciting opportunity that could better support their creative process. For instance, using speech recognition to select different tools, eye tracking for making rapid selections, and other assistive tools (e.g. trackballs) for finer controlled work. Obviously the solutions developed would need to be tailored and adapted for individual artists, but the combination of tools seems a fruitful area for further work.

One really interesting point made by a personal assistant was around the importance of new assistive tools not just supporting creative work, but also all of the wider essential tasks that must be completed by artists (e.g. email correspondence, managing/editing budgets, ordering of supplies, managing contracts, researching on the web, etc.). These are tasks that many disabled artists struggle to complete as they are typically completed on a computer and can therefore become hugely time-consuming (thus eating into time for creative work). It is therefore crucial that we take a more holistic approach to building assistive tools for artists where both creative and wider practice activities are supported.

The use of new assistive tools to support wider practice activities is an area that has received no attention to date in the research literature and we have therefore decided to explore this area in more detail in the next phase of the project. We are focusing on how eye gaze technology – combined with OptiKey (open source eye gaze communication and system control software) – can be used to support artists in a wide range of tasks. We have given the technology to a number of artists to examine potential impact on practice over a longer period of time (working in their own environment).

These longer-term studies are currently running and we’ll be providing more details in a future post.

Disabled Artist Workshops: Exploring Current Practice and Technology Evaluations

Over the last couple of months we’ve been working with disabled artists to explore their practice in more detail and to evaluate a range of different technologies to better understand how they might influence that practice.

We’ve now completed the first round of workshops with artists, which has proved to be a fascinating and insightful process, and we wanted to provide an overview of what we focused on in these meetings (including session structure, the technology tested, and the evaluation tasks we asked artists to complete).

Session Structure

We worked with twelve artists who had a range of different impairments including cerebral palsy, muscular dystrophy, multiple sclerosis, motor neurone disease, arthritis, and generalised dystonia – all of which affected their practice in various ways. The artists included painters, printmakers, sculptors, photographers, and illustrators across different career stages (emerging, mid-level, and established).

The sessions with artists initially involved speaking with them in more detail about their practice (along with examples of their work) and learning more about their process, the issues they regularly experience, and how they currently incorporate the use of digital tools into their practice.

This was followed by a technology evaluation session where artists were able to test a range of technologies to provide an initial sense of how these tools might influence their practice. The technologies included eye tracking, mid-air gesturing, head tracking, and the use of facial expression “switches”. We also discussed other technology and software that artists may not have used before – these included Finger Mouse, SteadyMouse, and others.

For each tool we asked artists to complete a series of simple tasks to enable them to become more familiar with the technology. We also adapted which tasks some artists were asked to perform to ensure they used tools which were most appropriate for their type of disability.

Eye Tracking

In the eye tracking session we evaluated the Tobii EyeX sensor – combined with Project IRIS software (for cursor control) – to explore how artists found this method of interaction. A calibration task was initially completed that consisted of following a dot around the screen to enable the system to track eye gaze effectively.
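
The fitting step inside such a calibration is internal to Tobii’s sensor, but the general principle is simple enough to sketch: collect raw gaze estimates while the user fixates known dot positions, then fit a correction mapping by least squares. Here is a minimal illustrative sketch in Python – the `sample_raw_gaze` helper and the affine model are our assumptions, not Tobii’s actual method:

```python
import numpy as np

# Known on-screen dot positions shown during calibration (pixels).
targets = np.array([[100, 100], [960, 100], [1820, 100],
                    [100, 980], [960, 980], [1820, 980]], dtype=float)

def sample_raw_gaze(true_xy):
    # Stand-in for the sensor: a fixed distortion plus noise, so the
    # least-squares fit below has something to recover.
    x, y = true_xy
    return np.array([0.95 * x + 20, 1.05 * y - 15]) + np.random.randn(2) * 3

# One raw gaze sample per dot (real systems average many samples per dot).
raw = np.array([sample_raw_gaze(t) for t in targets])

# Fit an affine correction A so that [raw_x, raw_y, 1] @ A is close to the target.
design = np.hstack([raw, np.ones((len(raw), 1))])
A, *_ = np.linalg.lstsq(design, targets, rcond=None)

def correct(gaze_xy):
    """Map a raw gaze estimate to calibrated screen coordinates."""
    return np.array([gaze_xy[0], gaze_xy[1], 1.0]) @ A

print(correct(sample_raw_gaze((960, 540))))  # should land near (960, 540)
```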

We then asked artists to use a basic interface (image below) to create multiple squares and then position them over the corresponding grey shapes displayed in the background. This involved artists having to create three shapes and then move, rotate, and resize the squares using the appropriate shape manipulation tools (on the right-side of the application).

A screenshot of the interface used in the eye gaze shape manipulation task

To select a tool artists needed to look at it and then use a switch (the black button in the image below) that simulated a mouse click. So, for example, to create a new shape the user would look at the rectangle icon in the top-left of the interface, and then press the black button to create a square on the screen. They could then select the controls in a similar manner on the right-side of the screen to move the shape around or to resize/rotate it.

A photo of the experimental setup, which included a monitor with the Tobii EyeX attached – along with two different switches
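
The selection logic just described is essentially a hit test of the most recent gaze point against the interface’s controls whenever the switch fires. A minimal sketch of that idea (control names and coordinates are made up for illustration; this is not the actual application code):

```python
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    x: int
    y: int
    w: int
    h: int  # bounding box in screen pixels

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

controls = [
    Control("new-rectangle", 10, 10, 120, 120),  # icon in the top-left
    Control("move", 1700, 200, 120, 120),        # manipulation tools on the right
    Control("resize", 1700, 340, 120, 120),
]

def on_switch_press(gaze_x, gaze_y):
    """Called when the physical switch is pressed; activates the gazed control."""
    for control in controls:
        if control.contains(gaze_x, gaze_y):
            print(f"activating {control.name}")
            return control
    return None  # switch pressed while looking at empty canvas

on_switch_press(60, 60)  # artist looks at the rectangle icon and presses the switch
```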

This task was primarily intended to allow artists to become more familiar with using their eyes to control graphics – we then wanted to see how they found using more traditional and common software such as Photoshop. For this setup, we introduced another switch (the green button in the image above) to provide users with a left mouse button click and a “drag” button.

So, for example, artists could look at a shape in Photoshop, press the “drag” button to pick it up, look to where they’d like to move it, and then hit the “drag” button again to drop it in the desired place.
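
Under the hood, this kind of “drag” toggle can be as simple as a two-state switch between mouse-down and mouse-up at the current gaze position. A hedged sketch using pyautogui to synthesise the mouse events (our switch hardware and gaze source are represented by hypothetical hooks):

```python
import pyautogui

dragging = False

def current_gaze():
    # Hypothetical stand-in for the eye tracker; here we just reuse the
    # current mouse position so the sketch runs without a sensor.
    return pyautogui.position()

def on_drag_button():
    """Toggle between picking up and dropping at the current gaze point."""
    global dragging
    x, y = current_gaze()
    if not dragging:
        pyautogui.mouseDown(x, y)  # "pick up" the shape under gaze
    else:
        pyautogui.mouseUp(x, y)    # "drop" it wherever the user is now looking
    dragging = not dragging
```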

We also introduced artists to the zoom feature that is included with the Tobii EyeX package. This tool is designed to make selections easier (especially for small items) and initially involves a user looking in the approximate area where they’d like to make a selection – a zoom lens is then displayed on a button press providing users with the potential to select smaller interface elements (see below for an example).

A screenshot showing how the magnifier works with the Tobii EyeX sensor
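
The coordinate mapping behind this kind of two-stage selection is worth spelling out: a point fixated inside the magnified lens is mapped back to true screen coordinates by undoing the zoom around the lens centre. A generic illustration (not Tobii’s implementation):

```python
def lens_to_screen(gaze_x, gaze_y, cx, cy, zoom):
    """Map a gaze point inside a zoom lens back to true screen coordinates.

    (cx, cy) is the centre of the magnified region; zoom is the
    magnification factor (e.g. 4.0).
    """
    return (cx + (gaze_x - cx) / zoom,
            cy + (gaze_y - cy) / zoom)

# A 4x lens centred on (800, 450): a fixation 200 px right of centre in the
# lens corresponds to a point only 50 px right of centre on the real screen.
print(lens_to_screen(1000, 450, 800, 450, 4.0))  # -> (850.0, 450.0)
```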

The tasks we set within Photoshop included moving a square from one side of the screen to the other, selecting different icons/tools, creating a new rectangle with a specific height/width, and then combining different transformations of a shape (e.g. moving, resizing, and rotating).

These tests were intended to explore how people found using the Tobii EyeX sensor out-of-the-box with existing software and to examine if this was a viable option for artists. Whilst the tasks sound very simple (and they are if you’re using a mouse/keyboard) they were not as easy to complete as you might expect (we’ll provide specific details in future posts).

Head Tracking

In this session we primarily evaluated EnableViacam – a piece of software that tracks head movements via a webcam, enabling the mouse cursor to be controlled accordingly.

We asked artists to complete the same tasks as used in the eye tracking session to again enable us to better understand the potential of the technology and how the interaction differed from eye tracking.

We also completed some evaluation work with KinesicMouse – a tool that uses the Microsoft Kinect v2 sensor to track head movements and facial expressions. This provides far more possibilities than tracking with EnableViacam and a webcam as it enables users (for example) to control a system using eye brow raises, puckering of lips, frowning, smiling, and a whole range of other facial movements.

There are also numerous options for configuring the system to an individual’s own specific requirements.

Mid-Air Gesturing

We used the Leap Motion sensor to investigate the potential of mid-air gesturing to support artists. This device can track hands, fingers, and pointers in “mid-air” and thus enables people to – for example – paint using their finger in mid-air or to sculpt digital materials using mid-air gestures.

To give artists an opportunity to become more familiar with using the sensor we asked them to have a play with the Playground application (that’s shipped with the device) which provides visualisations of hands when placed above the sensor, the ability to pick up objects (and throw them around), and also to pick petals off flowers (image below).

These were slightly gimmicky tasks, but a fun way to initially show how the sensor works.

A screenshot showing a virtual hand picking petals off a flower – controlled via the Leap Motion sensor

We then asked artists to use the Sculpting application which enables users to sculpt digitally (using a range of materials) and then export them as models which can be 3D printed. The application also provides novel menu systems (e.g. for selecting different tools, etc.) designed with mid-air gesturing in mind, so we gave artists the opportunity to play with the technology and evaluate the different features.

A screenshot of the Sculpting application

Case Studies & Longitudinal Studies

These were the primary technologies we evaluated with artists and there were numerous interesting findings from the sessions (which we’ll provide an overview of in a future post).

We’re also now planning the next phase of the project which will involve longer-term evaluations with one of the technologies. Artists will be given the opportunity to take some kit home with them and use it in their own working environment over a longer period of time.

Again, we’ll be providing more details around this work in the near future.

Disabled Artist Online Survey: Early Findings & Emerging Themes

Over the last six weeks we’ve been carrying out an online survey with physically impaired visual artists to help better understand their current practice and experience with assistive technology. We’ve had around 35 responses to date and we thought it would be interesting to provide an overview of the responses we’ve received thus far and to highlight some of the key themes that are emerging.

The survey has been completed by artists from a diverse range of disciplines including painters, illustrators, printmakers, clay and cardboard sculptors, eye tracking artists, and digital photographers. There’s also a mix of artists in terms of career stage ranging from early career through to more established artists (ages of respondents vary from 20-74).

The artists reported having physical conditions such as multiple sclerosis, motor neurone disease, generalised dystonia, muscular dystrophy, cerebral palsy, arthrogryposis, quadriplegia, and multiple joint arthritis.

These conditions have affected practice in a variety of ways – some artists have adapted their practice over time as a degenerative condition has restricted motor control, or after a sudden and unexpected disability forced complete changes in their creative work. Others have limited options remaining due to the severity of their disability and can only be active in certain artforms (e.g. the use of eye tracking to control graphical software).

A common theme is that people can only work for short periods of time before experiencing fatigue and needing a break. Several artists also work on smaller scales when their movement restricts the ability to work on larger canvases (e.g. using a smaller screen for detail work alongside a larger screen for overall perspective). Assistance is another key theme – many artists are reliant on other people setting up tools, making any adjustments, and being on hand to move materials around.

Some artists have to sit/lay in uncomfortable positions to enable better use of their upper body (where motor issues are a problem) – unfortunately, whilst this can make the artistic process more accessible, for some it can also lead to further health complications.

An interesting finding is that a significant majority of respondents are not currently using any form of assistive technology to support their working practice. Those who do are using trackballs, eye tracking technology, wheelchair accessories (e.g. for holding cameras), and motorised easels.

It’s important here to clarify what is meant by the term “assistive technology” – there are lots of definitions around and arguably all technology is assistive (as it typically attempts to assist people in completing some task). However, in the context of this work, we define it as a physical or digital tool that supports physically impaired artists in their practice.

So, examples of physical and more traditional tools include head wands, mouth sticks, and specially designed grips for holding brushes. Digital assistive tools include technology such as eye tracking, mid-air gesturing, facial expression tracking, brain-computer interfaces, head tracking, and a range of other tools. We exclude standard software from this definition (e.g. Photoshop) unless it has been designed specifically with assistive interactions in mind.

We were expecting more artists – especially those with some form of severe physical impairment – to be using some of these tools (especially eye/head tracking), but surprisingly it seems that the majority of artists are unaware of the range of digital assistive tools that are available to them that could help support their practice.

These are just some of the themes beginning to emerge from an initial informal analysis of the responses collected. It’s worth noting, however, that we’re still looking for more respondents – in particular, we’re especially keen to hear from younger or early career artists, people who are currently using digital assistive tools, and artists using more traditional assistive equipment (e.g. specially designed grips for brushes – or anything else broadly relevant).

If you haven’t completed the survey yet, please take 10-15 minutes of your time to let us know more about how you currently work. Or, if you know someone else who might be interested, please forward this on to them.

The survey can be accessed at: http://goo.gl/forms/HVwz4Jklol

The next stage of the project will explore the current practice of some of the artists in more depth enabling us to delve into further detail about their work and to start testing out some new technologies (alongside collecting/analysing more survey responses). We’re currently arranging dates with artists for some time in August and will again report all early/key findings on the site.

Head Tracking Technology for Disabled Artists

Head tracking technology holds much potential for physically impaired artists. It has dropped significantly in price over recent years and is relatively easy to install and set up. It also has some useful advantages over related technologies such as eye tracking in that there are multiple facial muscle movements that can be accurately tracked in real-time.

This means gestures such as blinks/winks, eye brow raises, lip movements, puckering of lips, chin raises (and more) can all be tracked and used to interact with the system. This can be incredibly useful for disabled users as it enables individuals to configure the tracking system to better suit their own specific and unique requirements.

There are both free and commercial head tracking solutions currently available. KinesicMouse, for example, is an application that uses the Microsoft Kinect v2 sensor to track head movements. This software can be used to assign around 50 facial expressions to trigger different keyboard, mouse, and joystick controls.

This is a nice solution as you only need to place the Kinect sensor in front of you – no additional tracking aids such as reflective stickers or hats are required. KinesicMouse costs around £250 for a single license (although there’s a free trial available if you want to test it out) and a Kinect sensor is also required which costs around £120.
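
We haven’t reproduced KinesicMouse’s actual configuration format here, but the core idea – mapping detected expression intensities to input events – can be sketched in a few lines. In this sketch the expression names, threshold, and `read_expressions` helper are all illustrative assumptions:

```python
import pyautogui

# Illustrative per-user bindings from facial expressions to input actions.
bindings = {
    "eyebrow_raise": lambda: pyautogui.click(button="left"),
    "lip_pucker":    lambda: pyautogui.click(button="right"),
    "smile":         lambda: pyautogui.press("enter"),
}

THRESHOLD = 0.7  # expression intensity (0..1) required to trigger an action

def read_expressions():
    # Stand-in for the tracker: returns {expression_name: intensity}.
    return {"eyebrow_raise": 0.9, "lip_pucker": 0.1, "smile": 0.2}

for name, intensity in read_expressions().items():
    if intensity >= THRESHOLD and name in bindings:
        bindings[name]()  # fire the configured mouse/keyboard action
```

Because the bindings are just a dictionary, they can be reconfigured per user – which is exactly the kind of flexibility that makes this approach useful for people with very different ranges of facial movement.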

However, there are cheaper options available which typically work via a webcam – this is advantageous as many computers and monitors typically come with built-in cameras – or external cameras can be purchased for under £10 (if your machine doesn’t have one already).

You’ll also need some software to make it possible for a webcam to track head movements and there are a range of FREE applications available – for instance, EnableViacam and Camera Mouse can track head movements and allow you to control the mouse cursor in Windows. It is therefore now possible to use head tracking technology at no additional cost (assuming you already have a webcam) – so why is it not more widely used amongst disabled users/artists?

I think a key problem is that whilst this software makes interacting with Windows and other applications possible and more accessible, the experience can also be particularly frustrating (especially when trying to perform simple actions such as double clicks or dragging items).

The core issue is that this type of software attempts to make head tracking technology fit with existing and traditional interface paradigms. However, these interfaces have normally been designed with mouse and keyboard interactions in mind, which provide a much finer level of control than head tracking.

For example, it’s relatively simple to select small icons or links from drop-down lists using a mouse (or keyboard) whereas this can be more problematic via head tracking. Attempting to perform basic actions can become very tedious which in turn can lead to people abandoning the technology through sheer frustration.

To address these types of interaction issues we need to design applications specifically optimised for the technology (and a user’s needs) to ensure that we have interfaces that better support head tracking interactions. There’s very little work in this area – especially in terms of creating new tools for disabled artists, but there are huge opportunities to address this now.

In particular, software development kits and libraries such as the Microsoft Kinect SDK and TrackingJS make it easier than ever for developers to create head tracking applications. We are particularly keen to explore this area in further detail during the D2ART project and are working with disabled artists to better understand the requirements for head tracking interfaces.
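
To give a sense of how little code a basic webcam head tracker now requires, here is a rough sketch using OpenCV’s bundled face detector and pyautogui. This is a generic stand-in rather than the Kinect SDK or TrackingJS, and far cruder than tools like EnableViacam:

```python
import cv2
import pyautogui

# OpenCV ships a pre-trained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cam = cv2.VideoCapture(0)
prev_centre = None
GAIN = 2.5  # scales head movement (pixels in frame) to cursor movement

while True:  # Ctrl+C to stop
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        centre = (x + w / 2, y + h / 2)
        if prev_centre is not None:
            dx = centre[0] - prev_centre[0]
            dy = centre[1] - prev_centre[1]
            # Mirror horizontally so moving your head left moves the cursor left.
            pyautogui.moveRel(-GAIN * dx, GAIN * dy)
        prev_centre = centre
```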

EyeArt: Painting with Your Eyes

Eye tracking is one of the key technologies that we’re interested in exploring further on the D2ART project. The technology has been around for a while now, but it has always been expensive, typically costing thousands of pounds for both the hardware and software.

Tobii, for instance, offer a range of assistive products that make it possible to interact with computers and tablets via your eyes. Unfortunately the high price of this innovative technology has not always made it easily accessible and feasible for everyone.

However, the price of eye tracking technology has dropped dramatically over the past few years. Take the Tobii EyeX sensor (below) – this new sensor costs around £90 ($139) and allows you to easily add eye tracking capabilities to your computer or tablet.

An image of the Tobii EyeX device

The sensor also comes with software that enables you to operate Windows using only your eyes and a specified key on your keyboard (for making selections). There are other options out there as well such as the Eye Tribe sensor which costs $99 and allows for interaction with systems via your eyes.

A crucial point is that both of these sensors provide a software development kit (SDK) which enables developers to start building custom eye tracking applications (e.g. check out the Tobii EyeX Development Kit).

This presents huge opportunities for developers to create assistive applications that better support disabled users. This is essential as while these sensors can make it possible for people to use existing software, popular applications (e.g. Word, Excel, Spotify, games, etc.) are not typically designed with eye tracking in mind.

This can cause a range of interaction issues – most software is designed based on the assumption that people will be using a mouse or keyboard (or their fingers). This, however, is unsurprisingly a very different type of interaction compared with using your eyes to control a computer.

For example, attempting to select small icons with your eyes can be particularly difficult. There’s also the well-known “Midas Touch” issue in eye tracking interfaces – that is, if you’re using your eyes to operate the system, how can the system distinguish between when you’re passively viewing something (e.g. looking at some artwork) and when you actually want to perform an action (select a button, flick a brush, etc.)?
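
One widely used mitigation, which the SDKs mentioned below make straightforward to implement, is dwell-time selection: an action only fires when gaze rests on roughly the same spot for a set duration. A minimal sketch (the thresholds and the simulated gaze stream are illustrative):

```python
import math

DWELL_SECONDS = 0.8  # how long gaze must rest on a target to "click" it
DWELL_RADIUS = 40.0  # pixels of gaze jitter tolerated while dwelling

class DwellDetector:
    def __init__(self):
        self.anchor = None       # where the current dwell started
        self.start_time = None

    def update(self, x, y, t):
        """Feed gaze samples; returns the dwell point when a 'click' fires."""
        if self.anchor is None or math.dist((x, y), self.anchor) > DWELL_RADIUS:
            self.anchor, self.start_time = (x, y), t  # gaze moved: restart
            return None
        if t - self.start_time >= DWELL_SECONDS:
            fired = self.anchor
            self.anchor = None                        # reset for the next dwell
            return fired
        return None

# Example: 30 Hz gaze samples hovering near (500, 300) eventually fire a click.
detector = DwellDetector()
for i in range(40):
    click = detector.update(500 + (i % 3), 300, i / 30)
    if click:
        print("dwell click at", click)
```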

There are some potential solutions around these issues, but it can become a real problem when using an application such as Photoshop with “out of the box” eye tracking technology. This type of application has lots of small icons, drop down menus, and other interface widgets which can make the interaction particularly frustrating and irritating.

These new SDKs present developers with the opportunity to build bespoke software that can help to address these types of interaction issues. We can now start to design and build applications (in collaboration with disabled users/artists) to ensure that they better support their needs and requirements.

Affordable eye tracking technology is an exciting development that holds much potential in the disability arts space. There are artists who already use this technology to facilitate the production of their work (check out Sarah Ezekiel’s website), but there’s still a long way to go to create digital tools that more effectively support creative activities.

Mid-Air Finger & Hand Painting for Disabled Artists

There are numerous new opportunities available for disabled artists to create their artwork due to the recent emergence of several affordable innovative technologies. This includes the ability to paint in mid-air using your fingers, hands, or a mouth/head stick.

This is an area we explored in collaboration with DASH (prior to the D2ART project starting) where we evaluated the use of the Leap Motion sensor (image below) that allows users to control systems using their fingers (or a pointer) via mid-air gestures.

An image of the Leap Motion controller

To develop a better sense of the device’s potential for disabled artists, we tested three Leap Motion applications (from the App Store) that allow users to create digital art via mid-air gesturing:

Photoshop Ethereal: This application allows users to interact with Photoshop using mid-air gestures and has some unique features developed specifically for the Leap Motion sensor (e.g. digital brush pressure control via mid-air gestures).

Sculpting: This is a 3D sculpting and modelling application that allows users to create different objects using mid-air gestures. It also includes new menu designs that have been created with mid-air gesturing in mind.

Leap Motion Orientation: This application is included with the sensor installation package and provides the ability to create basic and simple brush strokes.

We conducted an initial informal evaluation with two disabled and two non-disabled artists. One of the disabled artists has arthrogryposis and the other is in the early stages of multiple sclerosis. The Leap Motion sensor was placed on a table in front of a wall where the applications were projected (image below) and the artists were asked to share their thoughts as they used the technology.

A photo of a disabled artist using the Leap Motion sensor to control Photoshop

There were several key findings:

Hand Structure
The hand structure of the artist with arthrogryposis resulted in multiple points being detected by the sensor (as opposed to a single finger) which created a frustrating interaction experience as the cursor would jump around the screen. A rolled up piece of paper was used as a makeshift brush which the artist could hold and use more effectively.
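
The rolled-up paper was a physical fix; a common software-side complement for this kind of cursor jitter is to smooth the tracked point, for example with a simple exponential moving average (a generic sketch, not something we tested in the session):

```python
class Smoother:
    """Exponential moving average over tracked fingertip positions."""
    def __init__(self, alpha=0.2):  # lower alpha = steadier but laggier cursor
        self.alpha = alpha
        self.state = None

    def update(self, x, y):
        if self.state is None:
            self.state = (x, y)
        else:
            sx, sy = self.state
            self.state = (sx + self.alpha * (x - sx),
                          sy + self.alpha * (y - sy))
        return self.state

# Example: noisy fingertip samples jumping around the canvas settle smoothly.
smoother = Smoother(alpha=0.2)
for raw in [(400, 300), (460, 280), (350, 330), (420, 310)]:
    print(smoother.update(*raw))
```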

Firmness of Stroke
Determining the firmness of a stroke was problematic and it was felt that some form of calibration for setting the firmness would help. The digital canvas also had no depth or movement, which felt strange to the artists (the thickness of a canvas has implications for an artist’s strokes).

Photoshop Interface
Navigation of the (Photoshop) interface was an issue and attempting to select different tools was often frustrating due to the size of icons and attempting to perform the “click” mid-air gestures. It was also difficult for all artists to get a sense of where the brush was located on the screen (especially when starting a new session).

Continuous line drawing and brushing was challenging – the brush would often come away from the canvas which would result in unintentional blank spaces. Painting with mid-air gestures on a digital canvas also lacked the pencil and brush reactions you would get on paper or traditional canvas (i.e. the process of mid-air painting felt strange).

Opacity Control
One of the disabled artists found that using opacity control could simulate brushwork (via the use of mid-air gesturing), but commented that it was not nearly as responsive as using a Wacom or Intuos tablet.

In summary, it was clear from the evaluations that the Leap Motion applications tested were, in their current design, not well suited for artists with physical impairments. Photoshop, for instance, had too many options and it was difficult to select different tools due to the small size of the icons and difficulties in performing the appropriate selection gestures.

However, this is not surprising given that these applications were not designed for disabled artists. Simply bolting mid-air gesturing technology (or other interaction technologies) onto existing software such as Photoshop is always likely to produce a frustrating and tedious user experience for disabled artists.

We need to take a different approach – we need new interface designs that are developed and evaluated in collaboration with disabled artists to ensure these tools can help support and extend practice. This will be a key focus for the D2ART project moving forward.

Additional Note
This work was published and presented at the ASSETS 2014 conference in New York (see the reference below for further details):

Creed, C., Beale, R., & Dower, P. (2014). Digital tools for physically impaired visual artists. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility (pp. 253-254)

Disabled Artist Online Survey – We Need Your Help!

The first phase of work in the D2ART project involves developing a deeper understanding of how disabled visual artists currently work. To help achieve this we’ll be running some interactive workshops over the next couple of months where we intend to speak in-depth with some artists.

In addition to the workshops, we’ve also created an online survey to help gain an international perspective on how artists work – the survey introduction provides more details:

“… as an initial step we need to gain a deeper understanding around how disabled artists currently work and how new technologies could enhance accessibility and the creative process. We are also looking for artists who might be willing to work with us on developing and testing new technologies in the future (although completing this survey doesn’t commit you in any way).

For this survey, we are particularly interested in getting feedback from artists with physical impairments whose primary art form is in the visual arts (with a particular focus on painting, drawing, and digital art). We are also looking for artists who currently use some form of assistive tool/technology to assist them in their work – for example, artists who use non-digital tools (e.g. custom-made grips for holding brushes, head sticks/wands, etc.) or digital tools (e.g. eye tracking technology, speech input, custom-designed keyboards, etc.).

The survey should take no longer than 10-15 minutes to complete. Any information you provide will only be used in this research project and will not be used for marketing purposes or passed onto any third party. If you have any questions about this survey (or wish to have your data removed at a later date), please contact Dr Chris Creed (University of Birmingham) at creedcpl@bham.ac.uk.”

Click here if you’d be willing to help out (or if you know anyone else who might be interested). We plan to disseminate the key findings from the survey on this website over the next couple of months.

Thanks for your help!

D2ART – Digital Tools for Disabled Visual Artists

Artists with physical impairments typically face numerous obstacles when working on their art. Current assistive tools such as head wands, mouth sticks, and custom-designed grips can help make the artistic process more accessible, but they often force unnatural movements that must be repeated over and over. This, in turn, can lead to other physical issues such as severe neck strain and damage to teeth.

Physically disabled artists are also often highly dependent on the assistance of other people to help with setting up a canvas, paints, brushes, and anything else required. Moreover, if any adjustments are needed after the initial setup, support workers or carers need to be available to assist the artist. This lack of independence and reliance on other people can result in a frustrating and tedious experience for disabled artists that disrupts their creative process.

A new opportunity has recently emerged with the release of several innovative and affordable sensors that have the potential to transform how people with physical disabilities interact with computing systems. Sensors and devices such as the Microsoft Kinect, Leap Motion, Touch+, Tobii EyeX, and Eye Tribe can accurately track body and eye movements in real time enabling people to interact with systems in new ways.

D2ART is a new AHRC funded project that will look to address the issues disabled artists currently experience through exploring how these novel digital technologies can help support and extend an artist’s practice. These types of devices hold much potential as assistive tools, but no studies to date have explicitly examined how they can be used to create digital tools that support disabled artists.

For instance, they can make traditional art forms that are currently difficult or impossible for physically impaired artists to participate in (e.g. sculpting for double amputees or people with severe arthritis) more readily available and accessible in digital form. This gives rise to new hybridised art forms (e.g. digital sculpting via mid-air gesturing, 3D printing of digital models, etc.) that have received no attention to date in the context of disability arts.

The use of these tools raises numerous important and timely arts and humanities research questions around their impact on practice, visualisation of the creative process, artistic identity, perceptions of authenticity, and audiences’/artists’ broader perceptions of work.

The D2ART project will specifically address two key research questions:

1. How can innovative and affordable sensor based digital tools support, extend, and transform the creative practice of physically disabled artists?

2. Which new art forms emerge from these digital tools and what impact do they have on artistic identity and audience/artist perceptions of authenticity?

Our longer term goal is to develop a suite of custom designed digital tools for disabled artists and to research the impact they have on creative process and output over time.

The D2ART project will enable us to start working towards this goal through (1) enabling us to better understand the current practice of disabled artists and the issues they experience, (2) pilot testing a variety of digital tools and their impact on practice in longitudinal user studies with professional artists, and (3) building an international cross sector/discipline network to discuss key themes in the field and to set the research and funding agendas over the next 5-10 years.

This website will be used to disseminate our progress throughout the project and to report on key research findings. You can also follow our progress on Twitter at: @D2ArtDigital.