The world quantified becomes data; data made sense of becomes information; information collated becomes knowledge; and knowledge, thus scientific, practically applied, becomes technology. Technology, as Bill Gates rightly said, is a tool. Technology is not the answer; at its best, it is an assistant in the pursuit of one.
Lifetimes have been spent honing this tool, and it has never been sharper than it is today. It is but natural to marvel at all that technology can do and will yet do. Technology has had many faces, from the earliest classical mechanics, to the age of electricity, to today's era of electronic intelligence.
Practically applied knowledge becomes technology, becomes machine. Machines like the one used to print these words, like the one being used to read these words, have been helping humans simplify the rather insignificant tasks of their daily lives for ages now. Slaves to human masters, they have become inseparable from human existence. It then becomes innately imperative that the conversation between two unequals be smooth. An improvement in this conversation is what has progressed the use of wheels from being propelled by raw human effort to being propelled by the swift press of an effortless pedal.
The electromechanical companions of humans, our gadgets, form the focus of our interest today. A tap on an illuminated brick, flicks of fingers on a confused block of letters, and commands in the common tongue are some of the ways we interact with them. This shift of effort between men and machines has been tremendous.
The bright, uncertain future is awe-inspiring, and BufferedWriters brings to you a roadmap into the unknown: the machines of the future, and the effortless modes of conversation with them, compiled into three of the most promising and likely outcomes - Wearables, Drones and Virtual Reality.
When James Cameron came up with the highly successful sci-fi movie 'The Terminator', it not only opened human imagination to the possibility of a cyborg, but also gave breath to speculation about whether we would ever be able to invent such devices. Some 30 years later, when Sergey Brin, co-founder of Google, publicly announced Glass at an event held by the Foundation Fighting Blindness, the question was no longer whether we would, but how far we would go. Wearables promise to play such a crucial role in the next generation of technology that even today's trendiest gadgets might fail to find a place in your pockets five or ten years from now. Imagine a world where you can virtually have your family right in front of your eyes, looking through a glass. A world where your wristwatch can measure your blood pressure and return a daily analysis of your workout, and where your clothes replace the necessity of a charger for your smartphone. These gadgets are not wishful sci-fi; they are very much the reality of today. Wearables have brought technology into the closest proximity to humans ever achieved, with projects going as far as under-skin chips.
Wearables are mostly indistinguishable from the accessories whose design they imitate, except that they are intended to interact with the user without key presses or other physical stimulation. Like most gadgets, they comprise hardware, such as a watch or a fabric, and an information aggregator or analyzer. The aggregator collects samples of data through which a user can interact with the wearable, say a voice command or a touch on a photosensitive fabric. When the wearable is stimulated by the user, the aggregator evaluates the instructions in real time and executes the proper course of action.
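The sample-then-dispatch model just described can be sketched as a simple loop. Everything below, sensor fields, commands and handlers alike, is an illustrative assumption, not any real device's API:

```python
# A minimal sketch of a wearable's aggregator/analyzer loop.
# Sensor fields, commands and handlers are illustrative assumptions.

def read_sensors():
    # A real wearable would poll hardware here; we fake one sample.
    return {"heart_rate": 72, "voice_command": "show hr"}

# Map recognized user commands to actions on the aggregated data
HANDLERS = {
    "show hr": lambda data: f"heart rate: {data['heart_rate']} bpm",
    "show steps": lambda data: f"steps today: {data.get('steps', 0)}",
}

def aggregator_step():
    """Collect one sample; if the user issued a command, act on it."""
    sample = read_sensors()
    command = sample.get("voice_command")
    return HANDLERS[command](sample) if command in HANDLERS else None

print(aggregator_step())  # heart rate: 72 bpm
```

In a real device the same loop runs continuously, with the stimulus arriving as an interrupt rather than a polled sample.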
It is a lesser-known fact that the technology behind wearables finds much of its history in attempts at cheating casinos. Some of the first wearables were introduced in the 1960s and '70s in the form of a pair of shoes and a jacket that could be operated using the toes to count cards in a game of blackjack and improve a gambler's odds at the roulette table by roughly 44%. However, the technology remained dormant for the next few years, apart from the outburst of calculator watches in the market in the '80s. The technology was admirable, but on most ends it wasn't practical for consumers, and with a barrage of controls, certainly not user friendly.
Wearables reached their most significant milestone when Dr. Steven Mann, the man credited as the father of wearable computing, came out with a backpack integrated with MOS Technology's 6502 microprocessor. The steel-framed backpack could control photographic equipment and display its output on a camera viewfinder attached to a helmet. He went on to invent the wearable wireless webcam, a device which clicked pictures from Dr. Mann's daily life and uploaded them to the web, an act which led many to regard him as the first "life blogger". Dr. Mann was awarded a Ph.D. in Media Arts and Sciences from the Massachusetts Institute of Technology in 1997.
Wearables were catapulted forward as the new millennium saw many tech giants investing resources in their development. Wearables started getting tailor-made to consumer requirements, with gadgets integrated into users' clothing and accessories. The hottest trend is surely wrist-mounted devices, whether smartwatches, fitness bands or a combination of both. These devices can perform the very basic operations of a smartphone, along with monitoring health indicators such as blood pressure and glucose levels.
Wrists, however, are not all that manufacturers are interested in. There is already a host of photo-snapping life-loggers available which take pictures throughout your day and keep a log of your movements, building up a searchable and sharable photographic memory of your day and capturing special moments spontaneously. Wearables can even analyze your music preferences and play music suited to your state of mind. The much-hyped Google Glass, a pair of glasses with an optical head-mounted display, responds to verbal commands, augmented by the occasional manual interaction via controls located directly on the frame. There has even been talk of eventually including a laser-projected virtual keyboard for times when voice commands just aren't enough. And with the ability to access countless sources of information in seconds and relay them to a miniature screen in the upper corner of the wearer's field of vision, Google Glass sets itself apart from other emerging technologies. The company even aims to pack all these functionalities into a contact lens, sparing itself the flak it received for the outrageously awkward design of the initial version.
While we might be persuaded to wear contact lenses, implanted technology will likely remain the preserve of medical applications, at least for the foreseeable future; under-skin chips are hardly the most comfortable thing a person would like to have. Our clothes, however, are fair game for a wearable revolution. As printed circuits and chips get smaller, the time is ripe for techy clothing to expand beyond capacitive gloves or headphone-hats. Clothing is the tomorrow of wearables.
Back in 2008, the Georgia Institute of Technology reported having developed a miniature power plant integrated into clothing. Its electricity-generating wires create a charge when stretched and released, and if woven into a pair of trousers, could generate enough electricity to charge wearable sensors or even a smartphone. Wearable clothing presents an incredible opportunity not just for fitness-related products, but for retailers as well, through monitoring of the local environment. Electronically wired clothing could recognize compatible nearby outlets of the same brand, direct the wearer to certain parts of a store, or light up in response to in-store offers.
Most of the focus of wearables in the past has been on fitness-related products, a genre where wearables perform far better than their smartphone counterparts. However, the current generation of wearables has failed to please fashion pundits, a flaw which has led to increased demand for invisible wearables. Adidas is the closest to realizing this vision with the Adidas miCoach tank vest, which includes a heart rate sensor in the inner support bra. As integrated circuits shrink in size and improve in performance, athletes can transmit real-time data to their personal trainers and coaches, and supply statistics for TV commentary and media.
Even with their far-reaching scope, wearables have not quite lived up to expectations. Studies show that 80% of users stop using their wearables within six months. The largest-selling smartwatch, the Pebble, has sold one million units in two years; to put things into perspective, Android phones were heavily criticized at launch but took only six months to reach that figure. The highly anticipated Google Glass project had to scrap its current version following allegations of breach of privacy and a not-so-attractive design. While it is premature to predict the specific features or form factors that will prevail, wearable tech presents a fascinating field to study. Never before has computing been small enough to be worn relatively comfortably around the clock, presenting opportunities for breakthrough medical advancements and daily life monitoring. With innovations on the horizon, what we achieve through this pathbreaking technology now rests on human creativity.
Fascination has taken mankind on a journey of hope, discovery, innovation and transcendence. This journey, the compilation of the ephemeral steps of individual lives, has seen humans grow from animals into magicians who move mountains with a flick of a finger.
The wind beneath our wings - our fascination has made us fly. Many looked up at the sky and envied the freedom of flying birds. Then one day, someone was so frustrated by this ostensible inability that, seemingly the next day, humans could fly.
But the real power lies not in doing the task, but in having the task done. The fascination, the power and the opportunities took the shape of the flying minions we call drones.
Drones, a term coined much later than their invention, are pilotless aerial vehicles whose history dates back to the mid-1800s, when the Austrians attacked their enemies using explosives-filled hot air balloons. That drones were born in combat aligns quite aptly with our current perception of the technology. But it was only once the Wright Brothers gave wings to human dreams of flying at the onset of the 20th century that the future of drones as we know them today was truly born.
The concept of combat changed with the advent of armed, manned aeroplanes. The skies were the new frontier for wreaking havoc, and the immediate response was the creation of anti-aircraft units - ground-based gunners to shoot down in-flight airplanes. And somewhere in all that noise, drones droned for the first time. Not nearly as we know them today, but as dumb, unreliable machines that could only crudely simulate manned planes.
Anatomically, the basic machinery of a manned aeroplane took the shape of a drone when fitted with a radio-controlled gyroscope in place of a human pilot. This gave operational capabilities - control of roll, pitch and yaw - to a ground operator who would fly the plane within line of sight.
The development of UAVs progressed from model enemies to assault weapons in the years following the First World War and during the Second. The year 1936 saw the advent of unmanned aircraft that could be controlled out of sight and whose flight details were monitored from a ground terminal based on radio feedback. Around the same time, the now ubiquitous term drone first came into being, derived from the British aircraft DH.82 Queen Bee.
The success, in major part, of target drones and assault drones led to their use in other missions like reconnaissance and surveillance during the 1980s. In short, this period was a precursor to drones' use in ISTAR (Intelligence, Surveillance, Target Acquisition and Reconnaissance) today.
The latter half of the 20th century witnessed another iconic development which later played a key role in making drones more effective - the birth of satellite communications. The dream of satellite-controlled drones soon became a reality, with Israel the first country to employ them in combat, in its war against Syria in the early 1980s. Drones no longer needed to be pampered by nearby operators. Today, for instance, a drone flying over Afghanistan is controlled by two crew members sitting in a trailer ten thousand miles away in the Nevada desert outside Las Vegas.
Helicopters are a great inspiration for today's drones, and quadcopters are today's 'it' machine. Recreational UAVs are a trend that has spiked considerably in the last five years, and they mostly come in the shape of a rotorcraft. Sizes vary from nano-drones that could fit in the palm of your hand, like the Blade Nano QX RTF, to micro-drones, to much larger professional drones like the DJI Spreading Wings S1000. Their ability to take off vertically, hover at a point in space and move adeptly in restricted airspace gives them a natural advantage over fixed-wing aircraft.
But it is not mechanical breakthroughs that make drones the forthcoming wonder; it is the science of computers. Computers have already, though latently, played a part in the recent surge in drones' popularity. The small chip mounted on most of today's drones reads its sensors and, based on the environmental feedback, keeps the craft afloat. This includes managing its roll, pitch and yaw, while acceleration is most often left for the sentient operator to decide.
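The feedback loop such a chip runs can be illustrated with a toy single-axis PID controller. The gains and the simplified "physics" below are our assumptions for illustration, not real flight firmware:

```python
# Toy single-axis (roll) stabilization via PID feedback: the chip measures
# the error from level flight and computes a corrective output each tick.
# Gains and the simplified response model are illustrative assumptions.

def pid_step(error, prev_error, integral, dt, kp=2.0, ki=0.1, kd=0.5):
    """One PID update; returns (correction, updated integral)."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral

# A gust knocks the drone 10 degrees off level; the target roll is 0.
roll, dt, integral, prev_err = 10.0, 0.02, 0.0, 0.0
for _ in range(200):  # 4 seconds of control at 50 Hz
    err = 0.0 - roll
    correction, integral = pid_step(err, prev_err, integral, dt)
    prev_err = err
    roll += correction * dt  # toy model: the correction directly nudges roll

print(f"roll after 4 s: {roll:.2f} degrees")  # settles back near level
```

A real flight controller runs three such loops (roll, pitch, yaw) hundreds of times per second, fed by a gyroscope and accelerometer rather than a simulated disturbance.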
Up in the air, they are given a basic survival capability: to stay afloat without guidance. They listen to us and oblige our commands. We have mounted cameras on them, allowing us to take beautiful pictures from vantage points not readily reachable by humans. This has further allowed drones' employment for surveying in archaeology, guided pesticide treatment in agriculture, property scouting in real estate, locating victims in disaster management, border patrolling and many other such purposes.
Drones equipped with IR transceivers are flown over fields of crops, and the data relayed back is used to judge the health of the crops. Scientists in the US have flown drones over whales to collect data from the snot sneezed out by the mammals, from which their health and age can be judged. Sometimes the receiver making sense of what drones see and hear is a machine; at other times it is one of us humans. With our eyes and ears in the air, we have become the Mister Fantastic of the lore.
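One common way such infrared reflectance data is turned into a crop-health judgment is a vegetation index like NDVI. The sketch below is our illustration with made-up reflectance values, not a description of any particular drone's processing pipeline:

```python
# NDVI (Normalized Difference Vegetation Index) from two reflectance bands.
# Healthy vegetation reflects strongly in near-infrared (NIR) and absorbs
# red light, so NDVI values approaching 1 suggest healthy crops.
# The reflectance values below are illustrative, not real survey data.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

healthy = ndvi(nir=0.50, red=0.08)   # dense, healthy canopy
stressed = ndvi(nir=0.30, red=0.25)  # sparse or stressed crop

print(f"healthy: {healthy:.2f}, stressed: {stressed:.2f}")
```

Averaged per plot of land, such an index is exactly the kind of objective yes-or-no answer an autonomous crop-survey drone could relay back.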
The present is a turning point in the life of drone technology. The masses still believe that drones are something the US used for hunting down Al-Qaeda and Osama, something that forced the Taliban to reconsider its brutal takeover of Afghanistan, or maybe something to buy for their kids when they turn 14. But we are warming up to drones. Recent news doesn't just tell of the UK having the most advanced drone in its military fleet; we see one lawsuit after another being filed against the regulations imposed by the governments of the USA, UK, Australia and New Zealand on the use of consumer drones.
The struggle in the development of machines has this unmistakable characteristic: we want to get rid of the human element guiding them. Drones have been no different. The efforts today are focused on achieving autonomy in the flight of drones, or in simpler words, on getting rid of the human operator. We are trying to put enough intelligence on these machines to allow them a safe flight. All it will take is a set of instructions, and the next thing we know, we will be getting the data we need.
But data alone is not good enough for us, because it is an incomplete step of understanding. The concept of abstraction demands putting not just data-reading capabilities but data-interpretation capabilities on these systems.
The oblivious consumer would ask his pet drone to go check on the health of his crops. The obedient drone would know the location of the farm and the parameters that identify a healthy crop. It will see, it will evaluate, it will judge and it will know. A flyby moment later, what the unknowing user gets in return is an objective answer, a yes or a no. The human element is removed in its entirety.
Although it sounds simple enough to be true, the dream is yet to become reality, for the technology has yet to grow and expand enough to ensure the pervasive presence of the necessities of such intelligence, like GPS and the internet. Then there is the quality of the flight systems on board: we don't have machines smart enough to understand the environment around them and its inconsistencies. Artificial intelligence is growing, but it is still in its infancy, too young to take on challenges of such magnitude. When we talk of an autonomous drone, we talk of a fully intelligent machine with wings, and that intelligence is yet to see the light of day.
But that is not the primary concern; what sits firmly at the gates of this overbrimming dam is a set of socio-technical issues. Issues justified enough not to be taken lightly. Issues that have prompted the aviation authorities of various developed countries to put regulations on drones' commercial use. The reasons vary from privacy concerns, to terrorist threats, to accidental hazards. If drones become smart enough to fly themselves, what's stopping a maniac from strapping an explosive to one and crashing it into a building? What's stopping a stalker from prying into someone's property?
A change without opposition isn't really an impactful revolution. Detractors will be galore, and as in those countless times when an apparent demise preceded the onset of change, drones will be shot down. But an army of drones is lurking at that farther horizon called the future, and a dozen men with shotguns or a couple more lawsuits will be too few to tame them. They have come, they are yet to open their eyes, but by the infallible chronology, conquer they will.
Virtual Reality is simply an illusory environment, engineered to give users the impression of being somewhere other than where they are. As you sit safely in your home, virtual reality can transport you to a football game, a rock concert, a submarine exploring the depths of the ocean, or a space station orbiting Jupiter. It allows the user to ride a camel around the Great Pyramids, fly jets, or perform brain surgery.
True virtual reality does more than merely depict scenes of such activities — it creates an illusion of actually being there. Piloting a Boeing 777 with a laptop flight simulator, after all, does not really convey a sense of zooming across the continent 5 miles above the surface of a planet. Virtual reality, though, attempts to re-create the actual experience, combining vision, sound, touch, and feelings of motion engineered to give the brain a realistic set of sensations.
“The last step was moving from a command line interface to the visual interface. Maybe the next one is when you are totally immersed in the world.” - Howard Rheingold, journalist and author of “Virtual Reality”, one of the definitive historical accounts of VR
Some people locate the birth of virtual reality in the rudimentary Victorian “stereoscope,” the first 3D picture viewer. Others might point to any sort of out-of-body experience. But to most, VR as we know it was created by a handful of pioneers in the 1950s and 1960s. In 1962, after years of work, filmmaker Morton Heilig patented what might be the first true VR system: the Sensorama, an arcade-style cabinet with a 3D display, vibrating seat, and scent producer. Heilig imagined it as one in a line of products for the “cinema of the future,” but that future failed to materialize in his lifetime. In 1965, Ivan Sutherland — already known as the creator of the groundbreaking computer interface Sketchpad — conceived of what he termed “The Ultimate Display,” or, as he wrote, “a room within which the computer can control the existence of matter.” He demonstrated an extremely preliminary iteration of such a device, a periscope-like video headset called the “Sword of Damocles,” in 1968.
Meanwhile, at the Wright–Patterson Air Force Base in Ohio, military engineer Thomas Furness was designing a new generation of flight simulators, working on a multi-decade project that eventually became the hallmark program known as the Super Cockpit.
A few years later, in the late ’60s, an artist and programmer named Myron Krueger began creating a new kind of experience he termed “artificial reality,” attempting to revolutionize how humans interacted with machines.

Throughout the late ’90s and early 2000s, virtual reality companies continued to operate, but with a lower, more pragmatic profile. The military became the biggest advocate for VR’s utility. 3D graphics continued to advance, but referring to them as “virtual reality” became increasingly rare. Companies periodically showcased virtual reality systems and peripherals, but despite protests from Lanier and others, the “death of VR” had become a standard narrative.
Then, in 2012, Palmer Luckey revealed a $300 virtual reality headset called the Oculus Rift. While the Rift became a symbol of VR’s resurgence, the groundwork had been laid years before. Luckey had worked with researchers like Skip Rizzo, who used VR for cognitive and motor rehabilitation and to treat post-traumatic stress disorder, and Mark Bolas, who had moved to the University of Southern California’s interactive media program. Improvements in computing power and display technology, meanwhile, had solved some of the problems that had proved intractable in the 1990s.
There are technical reasons why the VR bubble burst the first time, and even today, while there are many promising developments and prospects, the technical challenges have not all been dealt with. Display resolution is an immediately obvious issue. A 1K x 1K panel is likely the best affordable resolution a head-mounted display (HMD) will support in the next year. Spread over a 100-degree viewing angle, that yields a display with less than 1/50th the pixel density of a phone held at a normal viewing distance. It will take a few years before HMDs with higher-resolution panels can be developed, though this is primarily a cost issue.
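The pixel-density comparison works out roughly as follows. The phone figures below are assumptions for a typical mid-2010s handset, so the exact ratio varies by device:

```python
import math

# HMD: ~1000 horizontal pixels spread across a ~100 degree field of view
hmd_ppd = 1000 / 100  # 10 pixels per degree

# Phone (assumed): a 1334-px-wide display, 10.4 cm across, viewed from 30 cm
phone_px, width_cm, dist_cm = 1334, 10.4, 30.0
phone_fov = 2 * math.degrees(math.atan((width_cm / 2) / dist_cm))
phone_ppd = phone_px / phone_fov

# Areal (per-solid-angle) pixel density scales as the square of linear density
ratio = (hmd_ppd / phone_ppd) ** 2
print(f"HMD: {hmd_ppd:.0f} px/deg, phone: {phone_ppd:.0f} px/deg, "
      f"areal density ratio ~ 1/{1 / ratio:.0f}")
```

With these assumed numbers the areal ratio lands in the same neighbourhood as the 1/50 figure above; the point is that each HMD pixel covers an enormous slice of the visual field compared to a phone's.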
Tracking is the determination of head position and orientation in the real world, together known as pose. If the virtual images are not in exactly the right place relative to the head's position in the real world, the illusion breaks down. The HMD needs to track translation as well as rotation, and what the user views must adapt to any change. Latency is another major issue that haunts VR kit developers. For the ideal virtual reality experience, tracking, rendering, transmitting to the display, photons starting to come out of the display, and photons ceasing to come out must all happen in under 20 milliseconds. Since a single 60Hz frame is 16.6 milliseconds and the latency in a typical game is about 35 milliseconds, getting to 20 milliseconds or less will be challenging.
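The arithmetic behind that latency budget, using only the figures quoted above, shows how tight it is:

```python
# Motion-to-photon latency budget using the figures from the text:
# a 20 ms target, a 60 Hz display, and ~35 ms in a typical game pipeline.
frame_ms = 1000 / 60            # one display refresh: ~16.7 ms
target_ms = 20.0
typical_game_ms = 35.0

remaining_ms = target_ms - frame_ms   # budget left after one frame of scanout
deficit_ms = typical_game_ms - target_ms

print(f"one frame: {frame_ms:.1f} ms, left for everything else: "
      f"{remaining_ms:.1f} ms; a typical game overshoots by {deficit_ms:.0f} ms")
```

One frame of display scanout alone consumes over 80% of the 20 ms budget, leaving only a few milliseconds for tracking, rendering and transmission combined.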
The most complex issue, though, is producing perceptions indistinguishable from reality. Pixels update only once a frame, and each subpixel - red, green and blue - activates in sequence rather than simultaneously. Moreover, pixels have persistence: they do not switch from zero to full brightness and back to zero instantaneously. This causes colour fringes and even judder. A refresh rate of 2000Hz could, theoretically, tackle the issue of persistence, but current displays max out at 120Hz - which leaves tremendous scope for development, improvement and research.
Finally, there remains the issue of motion sickness. This will remain a problem for many people using VR, but research shows that people eventually get used to it, much as they get their sea legs.
"What is real? How do you define real? If you're talking about what you can hear, what you can smell, taste and feel, then real is simply electrical signals interpreted by your brain." - Morpheus, The Matrix
Everything that we experience in our lives, all that our senses can perceive, can be reduced to electrical activity stimulating our brains as our sensory organs deliver information about the outside world. All our perceptions are interpretations of what our senses respond to. The brain is what is real, and if we can fool the brain, we can create an alternate reality for ourselves.
Words have never been able to relay intentions perfectly. VR holds the promise of allowing us to literally show one another what we mean rather than expressing it through crude verbal approximations.
Physical reality has certain limitations which virtual reality will be able to circumvent. For instance, moving from one location to another in the real world requires moving our physical body, which is constrained by the laws of physics. The human body is a delicate thing, easily damaged by many influences, and transportation can take up a lot of time. What if these dangers and constraints could be entirely eliminated?
VR is the projection of artificial stimuli upon the senses in order to create the interpretation that we are in a different location in space-time. We could very well visit places that do not really exist. Imagine what it would be like to explore the mountains of Mordor, to take a walk through Winterfell, or to meet Yoda! The prospects are countless and awe-inspiring.
Imagine, ten years ago, trying to envision the way we use cellphones today. It's impossible. That is the promise VR holds today. VR at its best shouldn't replace real life, just modify it, giving us access to so much that is just out of reach physically or economically. If you can dream it, VR can make it. It is a medium for progress, not the progress itself.
"Any sufficiently advanced technology is indistinguishable from magic." - Arthur C. Clarke
Where machines excel is not in entertaining humans but in helping them wake up to fewer troubles every day. We see such a future, full of infinite possibilities, where ages of knowledge are churned in and out daily, made sense of, and put to use.
A bacterium inside the body will be detected by an implanted wearable well before any suffering ensues; the pet drone sitting alongside will be notified to fetch the medicines and monitor their injection, which it will do most obediently; all the while, the human is still dodging laser blasts, kicking stormtroopers and enjoying, on his fifth-generation Oculus Rift virtual reality headset, the latest installment of that never-ending space saga of prequel, original and sequel trilogies the world so belovedly calls Star Wars.
But until that day comes, hone the tool we shall!