Apple unveiled its augmented-reality headset on Monday. The company is calling it Vision Pro, priced at $3,499 (the equivalent of five PlayStation 5s), and if you’re into games, dark goggles, and holograms jumping out of nowhere, remember: they’re just facsimiles of the real world. In fact, Apple has spent the last seven years trying to innovate and compete in a field of AR products that have gone, well, virtually nowhere.
Ten years ago, analysts projected the VR market would explode into hundreds of billions in revenue. It currently whimpers at $31B. And if you’re confusing Meta’s or Microsoft’s immersive VR applications with artificial intelligence, don’t. The latter is conflating man’s every endeavor into a convoluted reality.

Neuralink
Elon Musk’s brain-implant company received FDA approval this week to conduct the first clinical trial of its experimental device in humans. Neuralink is currently seeking participants for its trials.
Neuralink entered the industry in 2016 with a brain-computer interface called the Link — an electrode-laden computer chip that can be sewn into the surface of the brain and connected to external electronics — along with a robotic device that implants the chip.
Musk claims the device will cure blindness, paralysis, deafness and depression, but adds “the eventual aim is to create a general population device that connects a user’s mind directly to supercomputers and help humans keep up with artificial intelligence.” The device could eventually extract and store thoughts, Musk says, “as a backup drive for the physical being’s digital soul.”
Safety concerns alarm the industry: the implant’s lithium battery could overheat, the device could migrate to other parts of the brain, and removal could prove risky. The FDA claims there’s “a rigorous process in place for safety concerns related to the implant’s lithium battery.” However, the company has killed more than 1,500 animals since it began experimenting on them in 2018, and most of the company’s founders have quit; as of July 2022, only two of the eight original members remained.
Musk warns, “Artificial Intelligence is the existential threat of our time.”

Age of AI
AI relies on data to generate more data. It’s as simple as that. It’s like an improvisational actor who’s read a vast number of scripts but, sadly, has only those scripts from which to draw inferences.
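To make that analogy concrete, here is a minimal, purely illustrative sketch of predictive text: a toy bigram model that can only recombine words it has already seen. The training snippet and function name are invented for illustration; real large language models use neural networks trained on vastly more data, but the principle of drawing only on prior text is the same.

```python
import random
from collections import defaultdict

# Toy "scripts" the model has read. It can never produce a word that
# doesn't appear here; it can only recombine what it has already seen.
corpus = "the actor reads the script and the actor improvises the scene".split()

# Count which word follows which (a bigram model).
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def improvise(start, length=6):
    """Generate text by repeatedly predicting a next word from prior data."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # nothing in the "scripts" ever followed this word
            break
        words.append(random.choice(options))
    return " ".join(words)

print(improvise("the"))  # e.g. "the actor improvises the scene"
```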
While AI platforms are designed with collaboration in mind, data itself is generally subjective. When Amazon’s facial-recognition system (Rekognition) was tested against a mugshot database in 2018, it famously misidentified U.S. Members of Congress as offenders, disproportionately those who are Black. In fact, AI is being used by state actors who are tainting data to manipulate algorithms into nefarious patterns.
The Global Artificial Intelligence Landscape spans 3,465 companies and databases across the world. More than 4 billion devices already run AI-powered assistants, a number expected to reach 8.4 billion in 2024.

AI Superpower
In 2022, OpenAI’s ChatGPT became the fastest-growing consumer software application in history. Its initial release in November 2022 garnered over 100 million users within two months, and by January the company carried a valuation of $29B. Its stable release on 24 May hit the real world just 18 days ago.
In 2022, more than 400 self-driving cars crashed (273 of them Teslas), and before you prevail upon a computer to compose your doctoral thesis, it’s worth a gander at how this all actually works.
Recently, information technology engineers and academics who study intelligence analysis took OpenAI's newest version (GPT-4) out for a spin.
Former Chairman of the Joint Chiefs of Staff, National Security Adviser, and Secretary of State Colin Powell often said, “Tell me what you know. Tell me what you don’t know. Then you’re allowed to tell me what you think.” The irony is that Powell consciously deceived the United Nations in 2003 while making the case for the war in Iraq. But could AI recognize misinformation, or, in Powell’s case, disinformation, when presented with a national security matter? The scientists decided to present the following hypothetical:
Will Russia use nuclear weapons in its war with Ukraine?
To generate analysis, they first prompted GPT-4 to explain Heuer’s “Psychology of Intelligence Analysis.” Second, they provided context that GPT-4 lacks: because the multimodal language model was trained on data only up to 2021 (a year before Russia’s full-scale invasion of Ukraine), the researchers supplied the following factual enrichment:
On 25 May 2023 Russia moved ahead with a plan to deploy tactical nuclear weapons in Belarus, in the Kremlin's first deployment of such bombs outside Russia since the 1991 fall of the Soviet Union.
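A minimal sketch of how such a two-step session might be structured with the OpenAI Python SDK appears below. The prompts are paraphrased from this article, and the message wording and variable names are illustrative assumptions, not the researchers’ actual code.

```python
# Illustrative sketch only: mirrors the researchers' two steps (method
# framing, then factual enrichment past the model's 2021 training cutoff).
# Requires `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

messages = [
    # Step 1: ground the model in Heuer's analytic method.
    {"role": "system",
     "content": ("Explain and then apply Richards Heuer's 'Psychology of "
                 "Intelligence Analysis,' including the Analysis of "
                 "Competing Hypotheses (ACH) method.")},
    # Step 2: supply facts the model's training data lacks.
    {"role": "user",
     "content": ("Factual enrichment: On 25 May 2023 Russia moved ahead "
                 "with a plan to deploy tactical nuclear weapons in "
                 "Belarus. Question: Will Russia use nuclear weapons in "
                 "its war with Ukraine? Generate competing hypotheses "
                 "and an ACH matrix.")},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```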
In response, GPT-4 generated three hypotheses: (1) Russia will use nuclear weapons in Ukraine, (2) Russia will pursue conventional war only, and (3) Russia will use nuclear weapons only as a bargaining tool.
The model then prepared a matrix indicating whether each piece of evidence was consistent with each hypothesis. It’s worth repeating that GPT-4 wasn’t ‘thinking’ in the human sense, but ‘improvising’ predictive text based on preexisting data.
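For readers unfamiliar with Heuer’s method, an Analysis of Competing Hypotheses matrix looks something like the sketch below. The evidence rows and consistency ratings are invented for illustration, not the model’s actual output; the hypotheses mirror the three above.

```python
# Invented ACH illustration: "C" = consistent, "I" = inconsistent.
hypotheses = ["H1: use nukes", "H2: conventional only", "H3: nukes as leverage"]

ach_matrix = {
    "Tactical weapons moved to Belarus":                ["C", "I", "C"],
    "No change in strategic-force readiness":           ["I", "C", "C"],
    "Repeated nuclear rhetoric without follow-through": ["I", "C", "C"],
}

# Heuer's rule of thumb: favor the hypothesis with the LEAST inconsistency.
for i, hypothesis in enumerate(hypotheses):
    inconsistent = sum(ratings[i] == "I" for ratings in ach_matrix.values())
    print(f"{hypothesis}: {inconsistent} inconsistent item(s)")
```

Run as written, the tally favors H3, which happens to be the judgment GPT-4 reached.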
GPT-4 defined the question. GPT-4 collected, cleaned, and analyzed the data. GPT-4 even acknowledged bias and embraced its failures before providing its final summary:
Russia will use nuclear weapons as a bargaining tool only.
No competent intelligence analyst would view these results as groundbreaking; the output is akin to a first draft an entry-level analyst might produce. But the pivotal role of intelligence work is to craft a judgment, and making judgments means stepping beyond what is known into speculation. As former CIA Director Michael Hayden once quipped, “If it’s a fact, it ain’t intelligence.”

Age of AI
Forget that AI can play and win chess with moves human grandmasters haven’t conceived, or discover a new antibiotic by analyzing molecular properties human scientists don’t yet understand. AI is even powering jets that can defeat experienced human pilots in simulated dogfights.
Just 18 days ago, the stable version of GPT-4 was released to the public, emerging online to tease our relationship with knowledge, education, medicine, politics, and the societies in which we live. The World Economic Forum is calling prompt engineering the “Job of the Future,” but whether we run amok or into that future may depend on our ability to flip the script.
Problem Formulation — the ability to identify, analyze, and eliminate problems — presupposes we inherently understand the problem. Objectively, we often don’t. That's why they're problems.
Prompt engineering, predetermination, and machine predictions could prove counterproductive, particularly as a cheap distraction from human consciousness and a distortion of human intuition and common sense.
While OpenAI CEO Sam Altman cautioned Congress in May that "AI will probably most likely lead to the end of the world," over 20,000 signatories, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for an immediate pause on AI systems more powerful than GPT-4, citing the "profound risks to society and humanity.” Here's why:
Because a lack of transparency, bias, discrimination, privacy violations, disinformation, security risks, and the concentration of power in any one corporation or government could exalt machines over mankind.
If Artificial Super Intelligence (ASI) surpasses human intelligence within the next ten years, it could conceivably produce consequences like trains colliding in India, dams crumbling in Ukraine, and political elections being overturned in America. Even non-state actors like the Islamic State could take the battlefield with drones and explosives, as they did in the Battle of Mosul. Bank runs, market downturns, even nuclear war become conceivable, and here is the reason why: the engineered problems and priorities presented to independent, unregulated AI machines around the world will each be fed by extremely different ethical prompts.
While the US and China are leading the AI revolution, Brussels at present is the watchdog. The European Union’s Artificial Intelligence Act purports to govern any product or service that uses an artificial intelligence system, and it is the first such law by a major regulator anywhere.
It'll prosecute social scoring, public facial scanning, and predictive policing while protecting human dignity and freedoms. Violations can result in 6% of the offending AI company's global operating revenue. Union lawmakers are expected to vote on the legislation in a plenary session next week.
The first Global AI Summit will follow come fall, “for like-minded countries,” says UK Prime Minister Rishi Sunak, “who share the recognition that AI presents significant opportunities but realize we need to make sure the right guardrails are in place.”
Because AI regulation and safety isn’t just a race for the digital soul. It’s the arbiter, ensign and savior of our own.