Technology can be amazing and can change the world in positive ways – take breakthroughs in medicine that save lives, for example, or new developments in industrial automation that save us from risking our lives on dangerous jobs or wasting them on routine, mundane activities.
However, it can also be scary – whether it’s worries about the privacy implications of computers and the internet, or more existential fears such as robots taking over the world, or technology damaging the environment by creating emissions and pollution.
Sometimes, however, that fear and uncertainty are simply caused by a lack of understanding. This isn’t always our fault, as new technology is often first introduced to us by marketers or salespeople who are more interested in selling it as a solution to our problems than in explaining exactly what it is and what it can actually do!
So here’s a look at five breakthrough developments in technology that have emerged into the mainstream in the last decade or so. In my experience, most of them are still not properly understood and are surrounded by a lot of misconceptions! So I’ll try to give a super-simple explanation of what each of them actually is, as well as clear up some of the common misunderstandings I come across!
Artificial Intelligence (AI)
This is perhaps the most commonly misunderstood technology of all, and also one that causes a fair amount of anxiety! I’m certainly not saying that it isn’t a cause for concern, or that anyone seeking to use it shouldn’t be cautious. But it isn’t about building robots that will one day take our jobs or our planet!
The term “artificial intelligence,” as it is used today in technology and business, usually refers to machine learning (ML). This simply means computer programs (or algorithms) that, rather than needing to be told explicitly what to do by a human operator, become better and better at a specific task as they repeat it over and over again and are exposed to more data. Eventually, they may become better than humans at these tasks.

A great example of this is AlphaGo, a machine intelligence that became the first computer to beat a human champion at the game of Go. In Go, there are more possible board positions than there are atoms in the observable universe, so it would be very difficult to program a computer to react to every possible move a human player might make – which is how conventional, pre-programmed game-playing computers, such as chess computers, work. Instead, AlphaGo was taught the game and then left to try different strategies until it won, assigning higher weightings to moves and strategies that it found had a higher chance of success. In this way, it effectively “learned” to beat a human.
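To make the idea of “learning by weighting successful strategies” concrete, here is a deliberately simplified sketch – nothing like AlphaGo’s actual neural networks, and the strategy names and win probabilities are invented for illustration. The program is never told which strategy is best; it simply plays repeatedly and reinforces whatever wins:

```python
import random

def simulate_game(strategy):
    # Hypothetical stand-in for a real game: each strategy has a hidden
    # win probability that the learner does not know in advance.
    win_prob = {"aggressive": 0.3, "defensive": 0.5, "balanced": 0.7}
    return random.random() < win_prob[strategy]

# Start with equal weights for every strategy.
weights = {"aggressive": 1.0, "defensive": 1.0, "balanced": 1.0}

random.seed(42)
for _ in range(5000):
    # Pick a strategy in proportion to its current weight.
    total = sum(weights.values())
    strategy = random.choices(list(weights), [w / total for w in weights.values()])[0]
    # Reinforce strategies that win more often.
    if simulate_game(strategy):
        weights[strategy] *= 1.01

# The highest-win-rate strategy ends up with by far the largest weight,
# without anyone ever telling the program which one that was.
best = max(weights, key=weights.get)
print(best)
```

The feedback loop – win more, get picked more, win more – is the same basic principle that, scaled up enormously, let AlphaGo discover strong Go strategies through self-play.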
Until a decade or so ago, most people’s understanding of AI came from science fiction – specifically the robots and smart machines seen in TV shows and movies like 2001: A Space Odyssey, The Matrix, or Star Trek. These fictional machines were generally shown as being capable of what we call “general AI” – meaning they possessed pretty much all of the facets of natural (human or animal) intelligence, such as reasoning, learning, decision-making, and creativity, and could carry out any task they needed to do. Today’s real-world AI (or ML) is almost always what is known as “specialized” (or weak/narrow) AI – only capable of carrying out the specific jobs it has been created for. Some common examples are matching customers with items they might want to buy (recommendation engines), understanding human speech (natural language processing), and recognizing objects when they are spotted by cameras (computer vision).
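To show just how narrow “narrow AI” can be, here is a toy version of one of those examples – a recommendation engine. The customers and purchases are made up for illustration; real systems use the same core idea (people with similar histories like similar things) at vastly larger scale:

```python
# A toy recommender: suggest an item bought by customers with similar taste.
purchases = {
    "alice": {"book", "lamp"},
    "bob":   {"book", "lamp", "desk"},
    "carol": {"pen"},
}

def recommend(user):
    # Score each candidate item by how much its buyers' purchase
    # histories overlap with this user's history.
    scores = {}
    for other, items in purchases.items():
        if other == user:
            continue
        overlap = len(purchases[user] & items)
        for item in items - purchases[user]:
            scores[item] = scores.get(item, 0) + overlap
    return max(scores, key=scores.get) if scores else None

print(recommend("alice"))  # → desk (bob shares two purchases and also bought a desk)
```

This program can do exactly one job – suggest products – and nothing else, which is precisely what distinguishes specialized AI from the general AI of science fiction.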
Quantum Computing

Most people can be forgiven for this one. Gaining a low-level understanding of quantum computing generally requires knowledge of quantum physics that is beyond anyone who hasn’t studied the subject academically!
However, even at a higher level, there are a lot of common misconceptions. Quantum computers aren’t simply computers that are much quicker than regular “classical” computers, and they won’t replace classical computers, because they are only better at a narrow range of very specialized jobs. These generally involve mathematical problems that don’t usually come up as day-to-day business computing requirements – for example, simulating quantum (sub-atomic) systems and solving optimization problems (finding the best route from A to B when there are a lot of variables that can change). One area of day-to-day computing where quantum computing might supersede classical computing is encryption – for example, securing communications so they can’t be hacked. Researchers are already working on developing quantum-safe cryptography because there are fears that some of the most advanced cryptographic protection used for security at government level could be trivially defeated by quantum computers in the future. But quantum computing won’t let you run Windows faster or play Fortnite with better graphics!
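The encryption worry comes down to one mathematical fact: much of today’s cryptography relies on factoring huge numbers being impractically slow for classical computers, while a large quantum computer running Shor’s algorithm could do it quickly. This tiny sketch uses trial division – the most naive classical approach – on a small textbook-style example (3233 = 53 × 61); real RSA moduli are hundreds of digits long, which is exactly why brute force fails classically:

```python
# Classical trial-division factoring. The running time grows with the size
# of the smallest prime factor, which is why very large moduli are safe
# from this kind of brute force on classical hardware.
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

print(factor(3233))  # → (53, 61)
```

A quantum computer would not run this loop faster; it would use a fundamentally different algorithm – which is the sense in which quantum machines threaten encryption without being “faster PCs.”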
The Metaverse

The first place many people will have heard the term “metaverse” is the 1992 dystopian sci-fi novel Snow Crash by Neal Stephenson. And when the concept went mainstream in 2021 following Facebook’s change of name to Meta, numerous articles linked it to ideas found in the virtual reality (VR)-focused novel-turned-movie Ready Player One. But in fact, the concept as it relates to technology today isn’t necessarily exclusively about VR. And hopefully doesn’t have to be dystopian!
The fact is that no one yet knows exactly what the metaverse will look like, as it doesn’t exist in its final form yet. Perhaps the best way of thinking about it is that it encapsulates a collection of somewhat ambiguous ideas about what the internet will evolve into next. Whatever it is, it’s likely to be more immersive, so VR, as well as related technologies like augmented reality (AR), could well play a role in it. However, many proto-metaverses and metaverse-related applications, such as the digital game platform Roblox or the virtual worlds Sandbox and Decentraland, don’t yet involve VR. It’s also likely to be built around the concept of persistence in a number of ways – for example, users are likely to use a persistent representation of themselves, such as an avatar, as they move between different virtual worlds and activities. Users will also expect to be able to leave a virtual world and come back to it later to find they are still in the same “instance” – which is not the case in, for example, the virtual worlds that many people are used to exploring in video games, where the entire world might be reset when a new game is started.
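The idea of persistence is easy to demonstrate in miniature. In this sketch (the file name and state fields are hypothetical illustrations), the “world” survives between sessions because its state is saved and restored, rather than being reset each time the way a traditional game session might be:

```python
import json
import os
import tempfile

def save_world(path, state):
    # Write the world's state to disk so it outlives this session.
    with open(path, "w") as f:
        json.dump(state, f)

def load_world(path):
    # Restore the previous state if one exists; otherwise start fresh.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"avatar_position": [0, 0], "inventory": []}

path = os.path.join(tempfile.gettempdir(), "world_state.json")

# Session one: the user moves around and picks something up...
state = load_world(path)
state["avatar_position"] = [12, 7]
state["inventory"].append("torch")
save_world(path, state)

# Session two: the world is still in the same "instance" as we left it.
print(load_world(path)["avatar_position"])  # → [12, 7]
```

A real metaverse would keep this kind of state on shared servers (or distributed networks) rather than a local file, so that the same persistent instance is visible to everyone.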
Once it is a part of our lives, it’s possible that we won’t even call it the metaverse at all – just as no one really uses the term “worldwide web” anymore. This is nicely illustrated by Apple CEO Tim Cook saying he doesn’t think the term will catch on because “the average person” doesn’t really understand what it is. However, he does believe that individual technologies that are part of the metaverse – such as AR and VR – will be part of the internet’s evolution.
Web3

Web3, as the term is most widely used today, refers to another idea for the “next level” evolution of the internet – one tied to concepts involving decentralization, blockchain technology, and cryptocurrencies. This is confusing because another group of ideas exists, labeled “web 3.0” and proposed by Tim Berners-Lee – the man often referred to as the father of the World Wide Web. As with the term “metaverse,” both web3 and web 3.0 refer to what the internet may evolve into. And although the ideas are somewhat related and not necessarily mutually exclusive, they each describe different things! Confused? Don’t worry, so is everyone else!
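The blockchain idea at the heart of web3 is simpler than it sounds: it is a chain of records in which each entry commits to the one before it via a cryptographic hash, so history can’t be quietly rewritten. Here is a minimal hash chain (not a full blockchain – there’s no network or consensus, and the post contents are invented for illustration):

```python
import hashlib
import json

def make_block(data, prev_hash):
    # Each block stores its data, the previous block's hash, and a hash
    # of those two fields together.
    block = {"data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain):
    # A chain is valid only if every block's stored hash matches its
    # contents and every block points at its predecessor's real hash.
    for prev, block in zip(chain, chain[1:]):
        recomputed = hashlib.sha256(
            json.dumps({"data": block["data"], "prev": block["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["prev"] != prev["hash"] or block["hash"] != recomputed:
            return False
    return True

genesis = make_block("genesis", "0" * 64)
b1 = make_block("alice posts hello", genesis["hash"])
b2 = make_block("bob replies", b1["hash"])
chain = [genesis, b1, b2]

print(verify(chain))  # → True
b1["data"] = "alice posts something else"  # tampering with history...
print(verify(chain))  # → False: the altered block no longer matches its hash
```

Because no single record can be changed without breaking the chain, many copies of it can be held by many participants with no central owner – which is the property web3 advocates want to build the next internet on.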
Specifically, though, web3 looks forward to an internet where power and ownership aren’t centralized in large corporations that ultimately own the servers where data is stored, and software programs are executed. For example, many believe that large social network companies like Facebook and Twitter hold too much sway over public debate as, ultimately, they get to control who does or doesn’t have a voice. A decentralized web3 social network would, in theory, be controlled by its users and operate as a true democracy, with no Mark Zuckerberg or Elon Musk figure with the capability to cut off anyone who they didn’t think should have a platform.
A metaverse-oriented internet could be run on web3 principles – decentralized – but wouldn’t necessarily have to be. Likewise, a web3 internet could be organized as a metaverse (with immersion and avatars as key features) but, again, wouldn’t have to be. Hence the ideas are compatible visions for what the internet could become, but neither depends on the other.
5G

The arrival of a new generation of mobile internet technology has brought with it its own fair share of misunderstanding, including concerns about its possible impact on health. Many people were worried that the radio waves emitted by phones or transmitter masts could lead to health problems, including cancer. However, hundreds of studies carried out around the world by governments and independent research organizations have failed to turn up any evidence that this is true.
It’s also a common misconception that 5G is a single piece of technology or standard that has now been implemented, leaving us simply waiting to see the results – mainly faster internet on our phones. In fact, 5G is an evolving standard. Most of the infrastructure in place today relies on a slower form of 5G that effectively “piggy-backs” on the existing 4G LTE infrastructure. True “stand-alone” 5G is gradually being rolled out, which will enable the technology to reach its full potential in the coming years. This includes letting many more users connect within a limited physical area, such as a shopping mall or sports stadium, in theory eliminating the connectivity problems that often occur in densely populated locations. The real potential of 5G is not merely faster data transfer but a mobile internet that allows us to transfer new and exciting forms of data in different ways, creating applications that do entirely new things.
To stay on top of the latest on new and emerging business and tech trends, make sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books ‘Tech Trends in Practice’ and ‘Business Trends in Practice’, which won the 2022 Business Book of the Year award.