Last winter was my first with a dog in NYC, and that meant walks, lots of cold walks.
It didn't matter how cold or blizzard-like the conditions were; he still needed to go outside, and me with him. This also made for some hilarious dog-romping-in-snow moments. I attempted to photograph these moments with a fancy pair of touchscreen-friendly gloves, so my fingers wouldn't freeze and fall off.
My problem was that I could not operate the iPhone's camera with my fancy gloves on to capture these adorable snowy dog moments. The gloves were too clumsy and inaccurate to register my taps on (or near) the capture button. By the time my futile, repeated tap attempts finally took the picture, the moment was gone and I ended up with a blurry photo.
Why do we even need this tiny capture button anymore?
Camera triggers used to engage a mechanical mechanism that released and closed a physical shutter, exposing a section of light-sensitive material to the outside world. I know my iPhone doesn't do this, and yet we keep the trigger button.
Let’s take a second to consider the primary purpose of the Camera app — to capture a picture (or video). Sure, there are secondary actions related to focusing, flash, grids, and HDR, but the action a user completes 99% of the time is simply capturing a photo. So why do we make users tap this tiny button to complete their task?
Why can’t they just tap anywhere on the screen to capture a picture? This would offer multiple benefits to everyone.
It makes the action faster.
The user doesn't have to shift their focus from framing the photo to making sure their finger hits the button. By letting them tap anywhere, we remove a cognitive step, allowing them to keep their attention on framing the photo.
It's more accessible.
Users with movement limitations in their hands may have difficulty tapping the button, similar to what I experience with my gloves on. Allowing users to tap anywhere on the screen removes this problem.
It's more supportive of ambidextrous use.
Users don’t have to think about which hand they’re holding the phone with or which side the button is on. One-handed operation becomes even easier, letting the user hold the phone in either hand and in any orientation.
It clears up screen real estate.
Lastly, it removes the largest UI element from the screen, allowing more of the image to be displayed without disruption. For the iPhone, we could keep all of the other actions and buttons and simply remove the big white button. This would also allow for a better landscape-orientation UI structure. Currently, the different capture types (photo, video, square) are displayed sideways in landscape orientation, making them very difficult to read.
But what about controlling focus?
Yes, currently a tap sets the focus point. So if we use a tap to capture the photo, we could use tap+hold to set the focus point instead. This further supports a faster capture interaction. The user can tap+hold on the point in the scene they want to focus on, release their finger, then bring their finger back down in the same spot to take the photo. No longer would the user need to shift their attention between the framing, the placement of their finger, and the capture button.
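The tap-versus-hold split described above can be sketched as a tiny dispatch rule. This is a hypothetical model of the proposed interaction, not Apple's implementation; the 0.5-second threshold is an assumed value chosen for illustration.

```python
# Hypothetical sketch of the proposed interaction: a quick tap anywhere
# captures a photo, while a tap+hold sets the focus point at that spot.
# The 0.5s threshold is an assumption, not a value from any real camera app.

HOLD_THRESHOLD_S = 0.5

def handle_touch(press_duration_s, x, y):
    """Return the action the camera should take for a touch anywhere on screen."""
    if press_duration_s < HOLD_THRESHOLD_S:
        return ("capture",)           # quick tap: take the photo immediately
    return ("set_focus", x, y)        # tap+hold: focus on the touched point

# A quick tap anywhere takes the photo; a longer hold refocuses instead.
print(handle_touch(0.1, 120, 300))   # -> ('capture',)
print(handle_touch(0.8, 120, 300))   # -> ('set_focus', 120, 300)
```

Because the whole screen is the target, the only thing the user has to control is how long their finger stays down, which works just as well through clumsy gloves.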
But won’t people be confused if there isn't a button?
Yes, having an explicit capture button may make the interface more intuitive at first, but tapping anywhere on the screen is an extremely learnable interaction. With some simple onboarding the first time they use the camera, users would have no problem remembering that all they need to do is tap anywhere to take a photo. Hell, even if they don’t pay attention to the onboarding, they’ll figure it out because it’s so simple.
But nobody else is doing it, right?
Android has already started exploring this idea. The Moto X’s native camera app doesn't have a capture button. A few video apps (Vine, Instagram, and Spark) are exploring a similar ‘touch-anywhere-to-record’ interaction as well.
Sometimes it’s good to question historical assumptions and affordances to consider interactions that will truly support all users, including those with physical limitations. Using the camera is probably 50% of what I use my iPhone for, and anything Apple can do to make my interactions with the camera faster and easier would be welcome. I believe a good solution is to remove the archaic capture button and adopt a more ‘direct-manipulation’ UI.
If we had a capture method that was more accommodating of broader touch points, then there’d be more photos of puppies romping around in the snow across the internet, and who doesn’t want that?