The Obvious Solution to Multitasking on a Touchscreen

Without reverting to windows

Andreas Stegmann
hyperlinked

--

Tablets like the iPad combine very capable hardware with very incapable software; we’ve discussed this already.

A big chunk of the letdown of iPadOS comes down to Multitasking — the software interaction system to handle multiple jobs at the same time.

We all had high hopes for the new Multitasking system in iPadOS 15. And while it’s the most refined version yet, it hasn’t changed or improved significantly since split-screen multitasking was introduced in iOS 9. AppleInsider fittingly dubbed it “More of the same”.

Dieter Bohn argues it’s because of mixed metaphors:

I think the way the iPad handles windows and files and multitasking is not intuitive, by my particular definition of the word. I think the root of the conceptual confusion is that the user interface mixes both spatial and temporal metaphors.

I rather think it’s that you can’t trust the spatial windows to stay where you put them. On Windows™ you drag a window into the bottom right corner and know for sure that it will stay there until it gets moved, minimized or closed.

A windows-based operating system adds a frame aka “chrome” to each window. The chrome delivers two important functions: 1) It offers space to grab and drag the window to move it around. 2) It hosts visible options to manipulate the window, like a red X to close it.

Now, I’m very much in favor of a redo:

  • chrome takes away space for showing the “content”.
  • chrome offers maybe too many options to manipulate content — no one needs their window exactly one pixel to the left.
  • But importantly, chrome adds “frames of interaction”.

What are frames of interaction, you ask?

The more interface you have visible, the higher the cognitive load on the user. When parts of that interface belong to entirely different, unrelated frames (or levels) of interaction, the load is high.

Comparing the cognitive load in different frames of interaction

Multitasking is a lie. It’s a lie because nearly everyone accepts it as an effective thing to do. (Gary Keller)

This results in unnecessary cognitive load — and therefore unnecessary complexity. Lennart Ziburski:

Overlapping windows as an interface metaphor were invented over 40 years ago with the Xerox Star. Since then, the amount and complexity of how we use computers has increased dramatically. Windows are now inefficient and incompatible with modern productivity interfaces.

Yes, the user interfaces we use right now are that old (Jeremy Reimer once wrote a nice historical rundown). Surely, after numerous advances in the underlying technology, there is something precisely tailored for today’s devices? Something that accepts the human hand as an input device instead of the very small pointer of a mouse?

Let’s take a look at some of the concepts out there made by indie UX designers. For example, the concept from Daniel Korpai pushes open apps to the side instead of overlapping them. This feels much better to me.

I would go further and get rid of the option to have apps on top of other apps (as done with Slide Over). Let the 3D hierarchy be its own guide. Only certain OS functions should be available behind or above the app layer.

Therefore I still think the linear metaphor of Clayton Miller’s 10/GUI holds the most potential for touch/gesture-based UIs.

Note how every browser tab sits on the top layer instead of being hidden behind the thumbnail for the browser app. The right choice in an age where web apps are often as powerful as desktop apps. (Another detail that webOS got right.)

Fittingly, his concept is just as old as the iPad. There’s no excuse for a UI designer not to know it by now. Actually, a lot of them do, and have added their own sprinkles and useful features.

Here’s the aforementioned Lennart with his Desktop Neo.

Instead of a dock with icons, the user sees the content directly.

He also added options to Pin or Minimize windows in place. Think of them as pro features.

Kévin Eugène made use of the gesture bar introduced with the iPhone X and used it as a slider:

When apps are fullscreen, swiping at the bottom behaves exactly like on the iPhone.

He also brought the “flow” concept back to the Mac. It’s an adaptation of the four-finger fullscreen swipe gesture that I use constantly when I’m on my 13-inch MacBook screen.

These are all indications that a single, continuous app layer that scrolls horizontally, without unnecessary nesting, would make sense for simple and complex use cases alike.

It shows that Apple (or any other OS maker) could innovate while bringing the tablet and the desktop even closer together, too. I push back on the notion that desktop operating systems have it all figured out and should be left as they are.

I encourage you to check out the different concepts in detail. They include nifty touches, like saving a specific layout to quickly switch between scenarios:

Since October 2020, Kévin has worked directly for Apple. Hopefully on such groundbreaking stuff.

I made this post to show that there is a middle ground between “windows mode” on the one hand and, let’s call it, “splitscreen mode” on the other. Apple wouldn’t need to invent a perpetual motion machine; they would just need to do what they always do: take what’s already out there and combine it in a nice little package.

I’m a knowledge worker; I need the capabilities that an advanced multitasking system brings to the table, and every second not spent hunting for the right window or tab adds up to years in the aggregate.

--

Andreas Stegmann

👨‍💻 Product Owner ✍️ Writes mostly about the intersection of Tech, UX & Business strategy.