Touch Bar & Touch Screen, that’s it?
The last two days were not as spectacular as I had hoped they would be. Apple and Microsoft both introduced new hardware. Microsoft made a smart move by scheduling its product announcement exactly one day before Apple’s.
Microsoft introduced the Surface Studio: a stunning “iMac”-like all-in-one PC with amazing hardware specs and great touchscreen functionality.
The Touch Bar, as Apple calls it, is an additional touchscreen panel that replaces the current row of F-keys. The new MacBook is also faster, thinner, and lighter, but Apple’s MacBooks still lack a touchscreen. The main focus of yesterday’s keynote was definitely the Touch Bar. The Touch Bar might be a step in the right direction, but is this what innovation looks like? No, definitely not.
Even though Microsoft’s launch of the new Surface Studio was (in my opinion) more spectacular and innovative than Apple’s keynote, the question I ask myself is: what kind of device will we work with in the future? Will it be a laptop or a PC, or something different entirely?
PC Innovation: What about AR, AI, and NLP?
While Apple is putting all its bets on MacBooks – which don’t even have touchscreens – Microsoft is at least relying on accurate touchscreens across all its products. And Microsoft has another ace up its sleeve: HoloLens. While Tim Cook merely admits that augmented reality will have a profound impact on the way we live, work, and communicate, Microsoft takes the leap. Microsoft already has the world’s best-known augmented reality headset on the market. Isn’t augmented reality what will shape the way we work in the future? Isn’t the combination of voice recognition and natural language processing (NLP), artificial intelligence (AI), and augmented reality (AR) the future of work?
Typing and navigating through the operating system – be it macOS or Windows 10 – take up most of our time when working with computers. Why do three or four mouse clicks when you could accelerate the process with a single word, a thought, or even without doing anything at all?
Let’s take the example of Photoshop, which Apple used generously in yesterday’s keynote. Why do we still need keyboard shortcuts, the new Touch Bar, or a few clicks to set the brush size in Photoshop? Why don’t we simply say “brush 20” and automatically get a brush of 20 pt to draw with?
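To make the idea concrete, here is a minimal sketch of what such a voice-command layer could look like. Everything here is hypothetical: the command vocabulary, the `parse_command` and `apply_command` helpers, and the fact that a real integration would forward the result to an editor’s scripting API rather than return a string.

```python
import re

# Hypothetical command grammar: "<tool> <size>", e.g. "brush 20".
# Neither the tool names nor the handlers correspond to any real
# Photoshop API; this only illustrates the speech-to-action step.
TOOLS = ("brush", "pencil", "eraser")


def parse_command(transcript: str):
    """Parse a speech transcript like 'brush 20' into (tool, size)."""
    pattern = rf"({'|'.join(TOOLS)})\s+(\d+)"
    match = re.fullmatch(pattern, transcript.strip().lower())
    if not match:
        return None
    return match.group(1), int(match.group(2))


def apply_command(transcript: str) -> str:
    """Turn a transcript into an editor action (stubbed as a string)."""
    parsed = parse_command(transcript)
    if parsed is None:
        return "command not recognized"
    tool, size = parsed
    # A real integration would call the editor's scripting interface here.
    return f"selected {tool} at {size}pt"


print(apply_command("Brush 20"))  # selected brush at 20pt
```

The interesting engineering problem is not this parsing step, of course, but getting the speech recognition in front of it reliable enough that saying “brush 20” is actually faster than clicking.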
Depending on the phase a document is in, an AI assistant could recommend the tools we are likely to need next. We could choose one by simply saying a word, tapping a few times on the table, keyboard, or screen, or just looking at it.
As already mentioned, Microsoft is currently taking the leap in augmented reality development and testing. It already has a working HoloLens, which has the potential to become the workstation of our lives. If you ask me, it is probably Microsoft that will come up with the next big thing.
But until augmented reality is ready for the mass market, why don’t Microsoft and Apple embed some of the technologies that already work, like voice recognition, deep into their operating systems instead of relying mostly on a Touch Bar or a touchscreen?
The era of incremental laptop and PC upgrades is finally over; the market is ripe for some real innovation. Who will come up with the next big thing? Microsoft, Apple, or a disruptive startup?
What do you think: are touchscreens and the newly introduced Touch Bar really simplifying your workflow? Wouldn’t accurate voice recognition combined with AI be more helpful? Let me know your opinion in the comments!