
NIST Announces Post-Quantum Cryptography Standards



Today, much of the data on the Internet, including bank transactions, medical records, and secure chats, is protected with public-key encryption schemes such as RSA (named after its creators Rivest, Shamir, and Adleman). RSA is based on a simple fact—it is virtually impossible to calculate the prime factors of a very large number in a reasonable amount of time, even on the world’s most powerful supercomputer. Unfortunately, large quantum computers, if and when they are built, would find this task a breeze, undermining the security of much of the Internet.

Luckily, quantum computers are only better than classical ones at a select class of problems, and there are plenty of encryption schemes where quantum computers don’t offer any advantage. Today, the U.S. National Institute of Standards and Technology (NIST) announced the standardization of three post-quantum cryptography encryption schemes. With these standards in hand, NIST is encouraging computer system administrators to begin transitioning to post-quantum security as soon as possible.

“Now our task is to replace the protocol in every device, which is not an easy task.” —Lily Chen, NIST

These standards are likely to be a big element of the Internet’s future. NIST’s previous cryptography standards, developed in the 1970s, are used in almost all devices, including Internet routers, phones, and laptops, says Lily Chen, head of the cryptography group at NIST, who led the standardization process. But adoption will not happen overnight.

“Today, public key cryptography is used everywhere in every device,” Chen says. “Now our task is to replace the protocol in every device, which is not an easy task.”

Why we need post-quantum cryptography now

Most experts believe large-scale quantum computers won’t be built for at least another decade. So why is NIST worried about this now? There are two main reasons.

First, many devices that use RSA security, like cars and some IoT devices, are expected to remain in use for at least a decade. So they need to be equipped with quantum-safe cryptography before they are released into the field.

“For us, it’s not an option to just wait and see what happens. We want to be ready and implement solutions as soon as possible.” —Richard Marty, LGT Financial Services

Second, a nefarious individual could potentially download and store encrypted data today, and decrypt it once a large enough quantum computer comes online. This concept is called “harvest now, decrypt later,” and by its nature it poses a threat to sensitive data now, even if that data can only be cracked in the future.

Security experts in various industries are starting to take the threat of quantum computers seriously, says Joost Renes, principal security architect and cryptographer at NXP Semiconductors. “Back in 2017, 2018, people would ask ‘What’s a quantum computer?’” Renes says. “Now, they’re asking ‘When will the PQC standards come out and which one should we implement?’”

Richard Marty, chief technology officer at LGT Financial Services, agrees. “For us, it’s not an option to just wait and see what happens. We want to be ready and implement solutions as soon as possible, to avoid harvest now and decrypt later.”

NIST’s competition for the best quantum-safe algorithm

NIST announced a public competition for the best PQC algorithm back in 2016. It received a whopping 82 submissions from teams in 25 countries. Since then, NIST has gone through four elimination rounds, finally whittling the pool down to four algorithms in 2022.

This lengthy process was a community-wide effort, with NIST taking input from the cryptographic research community, industry, and government stakeholders. “Industry has provided very valuable feedback,” says NIST’s Chen.

These four winning algorithms had intense-sounding names: CRYSTALS-Kyber, CRYSTALS-Dilithium, SPHINCS+, and FALCON. Sadly, the names did not survive standardization: The algorithms are now known as Federal Information Processing Standards (FIPS) 203 through 206. FIPS 203, 204, and 205 are the focus of today’s announcement from NIST. FIPS 206, the algorithm previously known as FALCON, is expected to be standardized in late 2024.

The algorithms fall into two categories: general encryption, used to protect information transferred via a public network, and digital signature, used to authenticate individuals. Digital signatures are essential for preventing malware attacks, says Chen.

Every cryptography protocol is based on a math problem that’s hard to solve but easy to check once you have the correct answer. For RSA, it’s factoring large numbers into two primes—it’s hard to figure out what those two primes are (for a classical computer), but once you have one it’s straightforward to divide and get the other.
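To see that asymmetry in action, here is a toy sketch in Python (purely illustrative, with a textbook-size number; real RSA moduli are thousands of bits long). Checking a claimed factor takes one division, while finding a factor requires a search that explodes as the number grows:

    # Toy illustration of the RSA asymmetry (not real cryptography):
    # verifying a factorization is one division; finding one is a search.

    def verify_factor(n: int, p: int) -> bool:
        """Easy direction: confirm that p really divides n."""
        return 1 < p < n and n % p == 0

    def find_factor(n: int) -> int:
        """Hard direction: trial division. For a 2048-bit RSA modulus,
        this kind of brute-force search is hopeless on any classical computer."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n  # n itself is prime

    n = 3233                        # 53 * 61, a classic textbook example
    p = find_factor(n)
    assert verify_factor(n, p) and verify_factor(n, n // p)
    print(p, n // p)                # -> 53 61

Shor’s algorithm running on a large quantum computer would turn the hard direction into an easy one, which is exactly the threat the new standards are designed to head off.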

“We have a few instances of [PQC], but for a full transition, I couldn’t give you a number, but there’s a lot to do.” —Richard Marty, LGT Financial Services

Two out of the three schemes already standardized by NIST, FIPS 203 and FIPS 204 (as well as the upcoming FIPS 206), are based on a different hard problem, used in what’s known as lattice cryptography. A lattice is a grid of regularly spaced points extended across many dimensions. Lattice cryptography rests on problems that are easy to state but hard to solve in high dimensions, such as finding the shortest vector in a lattice, or undoing the effect of small random errors added to lattice points (the “learning with errors” problem).
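For a feel of how small errors make these problems hard, here is a minimal, insecure sketch of a Regev-style learning-with-errors cipher in Python. This is a toy with tiny parameters, not the actual ML-KEM construction behind FIPS 203:

    import numpy as np

    rng = np.random.default_rng(0)
    q, n, m = 257, 8, 32          # toy parameters; real schemes use far larger ones

    # Key generation: secret vector s; public samples b = A@s + e (mod q),
    # where e is small noise that hides s.
    s = rng.integers(0, q, n)
    A = rng.integers(0, q, (m, n))
    e = rng.integers(-1, 2, m)    # noise in {-1, 0, 1}
    b = (A @ s + e) % q

    def encrypt(bit: int):
        """Sum a random subset of the public samples and embed the bit."""
        subset = rng.integers(0, 2, m)
        return (subset @ A) % q, (subset @ b + bit * (q // 2)) % q

    def decrypt(c1, c2) -> int:
        """The accumulated noise is small, so the value lands near 0
        (bit 0) or near q/2 (bit 1)."""
        v = (c2 - c1 @ s) % q
        return int(min(v, q - v) > q // 4)

    assert all(decrypt(*encrypt(bit)) == bit for bit in (0, 1))

The public samples b look random because of the noise e; recovering s from (A, b) is believed to be hard even for quantum computers once the dimensions are realistically large.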

The third standardized scheme, FIPS 205, is based on hash functions—in other words, on converting a message to a fixed-length string that’s difficult to reverse.
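FIPS 205 (based on SPHINCS+) is far more elaborate, but the core trick of signing with nothing except a hash function is visible in the classic Lamport one-time signature, sketched here in Python as a simplified toy:

    import hashlib, secrets

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def keygen():
        """Private key: 256 pairs of random strings. Public key: their hashes."""
        sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
        pk = [(H(a), H(b)) for a, b in sk]
        return sk, pk

    def sign(message: bytes, sk):
        """Reveal one secret preimage per bit of the message digest."""
        digest = int.from_bytes(H(message), "big")
        return [sk[i][(digest >> i) & 1] for i in range(256)]

    def verify(message: bytes, sig, pk) -> bool:
        """Hash each revealed preimage and compare against the public key."""
        digest = int.from_bytes(H(message), "big")
        return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(256))

    sk, pk = keygen()
    sig = sign(b"post-quantum", sk)
    assert verify(b"post-quantum", sig, pk)
    assert not verify(b"tampered", sig, pk)

Each Lamport key pair can safely sign only one message, since every signature reveals half of the secret preimages; SPHINCS+ gets around this by composing vast numbers of one-time keys into a tree structure.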

The standards include the encryption algorithms’ computer code, instructions for how to implement it, and intended uses. There are three levels of security for each protocol, designed to future-proof the standards in case some weaknesses or vulnerabilities are found in the algorithms.

Lattice cryptography survives alarms over vulnerabilities

Earlier this year, a pre-print published to the arXiv alarmed the PQC community. The paper, authored by Yilei Chen of Tsinghua University in Beijing, claimed to show that lattice-based cryptography, the basis of two out of the three NIST protocols, was not, in fact, immune to quantum attacks. On further inspection, Yilei Chen’s argument turned out to have a flaw—and lattice cryptography is still believed to be secure against quantum attacks.

On the one hand, this incident highlights the central problem at the heart of all cryptography schemes: There is no proof that any of the math problems the schemes are based on are actually “hard.” The only proof, even for the standard RSA algorithms, is that people have been trying to break the encryption for a long time, and have all failed. Since post-quantum cryptography standards, including lattice cryptography, are newer, there is less certainty that no one will find a way to break them.

That said, the failure of this latest attempt only adds to the algorithms’ credibility. The flaw in the paper’s argument was discovered within a week, signaling that there is an active community of experts working on this problem. “The result of that paper is not valid, that means the pedigree of the lattice-based cryptography is still secure,” says NIST’s Lily Chen (no relation to Tsinghua University’s Yilei Chen). “People have tried hard to break this algorithm. A lot of people are trying, they try very hard, and this actually gives us confidence.”

NIST’s announcement is exciting, but the work of transitioning all devices to the new standards has only just begun. It is going to take time, and money, to fully protect the world from the threat of future quantum computers.

“We’ve spent 18 months on the transition and spent about half a million dollars on it,” says Marty of LGT Financial Services. “We have a few instances of [PQC], but for a full transition, I couldn’t give you a number, but there’s a lot to do.”

The Saga of AD-X2, the Battery Additive That Roiled the NBS



Senate hearings, a post office ban, the resignation of the director of the National Bureau of Standards, and his reinstatement after more than 400 scientists threatened to resign. Who knew a little box of salt could stir up such drama?

What was AD-X2?

It all started in 1947 when a bulldozer operator with a 6th grade education, Jess M. Ritchie, teamed up with UC Berkeley chemistry professor Merle Randall to promote AD-X2, an additive to extend the life of lead-acid batteries. The problem of these rechargeable batteries’ dwindling capacity was well known. If AD-X2 worked as advertised, millions of car owners would save money.

Jess M. Ritchie demonstrates his AD-X2 battery additive before the Senate Select Committee on Small Business. National Institute of Standards and Technology Digital Collections

A basic lead-acid battery has two electrodes, one of lead and the other of lead dioxide, immersed in dilute sulfuric acid. When power is drawn from the battery, the chemical reaction consumes the acid, depositing lead sulfate on the electrodes. When the battery is charged, the chemical process reverses, returning the electrodes to their original state—almost. Each time the cell is discharged, some of the lead sulfate “hardens” into a form that no longer dissolves back into the sulfuric acid. Over time, it flakes off, and the battery loses capacity until it’s dead.
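For reference, the overall cell reaction is standard textbook chemistry (not spelled out in the original sources); it runs left to right on discharge and right to left on charge:

    \mathrm{Pb} + \mathrm{PbO_2} + 2\,\mathrm{H_2SO_4} \;\rightleftharpoons\; 2\,\mathrm{PbSO_4} + 2\,\mathrm{H_2O}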

By the 1930s, so many companies had come up with battery additives that the U.S. National Bureau of Standards stepped in. Its lab tests revealed that most were variations of salt mixtures, such as sodium and magnesium sulfates. Although the additives might help the battery charge faster, they didn’t extend battery life. In May 1931, NBS (now the National Institute of Standards and Technology, or NIST) summarized its findings in Letter Circular No. 302: “No case has been found in which this fundamental reaction is materially altered by the use of these battery compounds and solutions.”

Of course, innovation never stops. Entrepreneurs kept bringing new battery additives to market, and the NBS kept testing them and finding them ineffective.

Do battery additives work?

After World War II, the National Better Business Bureau decided to update its own publication on battery additives, “Battery Compounds and Solutions.” The publication included a March 1949 letter from NBS director Edward Condon, reiterating the NBS position on additives. Prior to heading NBS, Condon, a physicist, had been associate director of research at Westinghouse Electric in Pittsburgh and a consultant to the National Defense Research Committee. He helped set up MIT’s Radiation Laboratory, and he was also briefly part of the Manhattan Project. Needless to say, Condon was familiar with standard practices for research and testing.

Meanwhile, Ritchie claimed that AD-X2’s secret formula set it apart from the hundreds of other additives on the market. He convinced his senator, William Knowland, a Republican from Oakland, Calif., to write to NBS and request that AD-X2 be tested. NBS declined, not out of any prejudice or ill will, but because it tested products only at the request of other government agencies. The bureau also had a longstanding policy of not naming the brands it tested and not allowing its findings to be used in advertisements.

AD-X2 consisted mainly of Epsom salt and Glauber’s salt. National Institute of Standards and Technology Digital Collections

Ritchie cried foul, claiming that NBS was keeping new businesses from entering the marketplace. Merle Randall launched an aggressive correspondence with Condon and George W. Vinal, chief of NBS’s electrochemistry section, extolling AD-X2 and the testimonials of many users. In its responses, NBS patiently pointed out the difference between anecdotal evidence and rigorous lab testing.

Enter the Federal Trade Commission. The FTC had received a complaint from the National Better Business Bureau, which suspected that Pioneers, Inc.—Randall and Ritchie’s distribution company—was making false advertising claims. On 22 March 1950, the FTC formally asked NBS to test AD-X2.

By then, NBS had already extensively tested the additive. A chemical analysis revealed that it was 46.6 percent magnesium sulfate (Epsom salt) and 49.2 percent sodium sulfate (Glauber’s salt, a horse laxative), with the remainder being water of hydration (water molecules chemically bound within the salts’ crystal structure). That is, AD-X2 was similar in composition to every other additive on the market. But, because of its policy of not disclosing which brands it tested, NBS didn’t immediately announce what it had learned.

The David and Goliath of battery additives

NBS then did something unusual: It agreed to ignore its own policy and let the National Better Business Bureau include the results of its AD-X2 tests in a public statement, which was published in August 1950. The NBBB allowed Pioneers to include a dissenting comment: “These tests were not run in accordance with our specification and therefore did not indicate the value to be derived from our product.”

Far from being cowed by the NBBB’s statement, Ritchie was energized, and his story was taken up by the mainstream media. Newsweek’s coverage pitted an up-from-your-bootstraps David against an overreaching governmental Goliath. Trade publications, such as Western Construction News and Batteryman, also published flattering stories about Pioneers. AD-X2 sales soared.

Then, in January 1951, NBS released its updated pamphlet on battery additives, Circular 504. Once again, tests by the NBS found no difference in performance between batteries treated with additives and the untreated control group. The Government Printing Office sold the circular for 15 cents, and it was one of NBS’s most popular publications. AD-X2 sales plummeted.

Ritchie needed a new arena in which to challenge NBS. He turned to politics. He called on all of his distributors to write to their senators. Between July and December 1951, 28 U.S. senators and one U.S. representative wrote to NBS on behalf of Pioneers.

Condon was losing his ability to effectively represent the bureau. Although the Senate had confirmed Condon’s nomination as director without opposition in 1945, he had been under investigation by the House Committee on Un-American Activities for several years. FBI director J. Edgar Hoover suspected Condon of being a Soviet spy. (To be fair, Hoover suspected the same of many people.) Condon was repeatedly cleared and had the public backing of many prominent scientists.

But Condon felt the investigations were becoming too much of a distraction, and so he resigned on 10 August 1951. Allen V. Astin became acting director, and then permanent director the following year. And he inherited the AD-X2 mess.

Astin had been with NBS since 1930. Originally working in the electronics division, he developed radio telemetry techniques, and he designed instruments to study dielectric materials and measurements. During World War II, he shifted to military R&D, most notably development of the proximity fuse, which detonates an explosive device as it approaches a target. I don’t think that work prepared him for the political bombs that Ritchie and his supporters kept lobbing at him.

Mr. Ritchie almost goes to Washington

On 6 September 1951, another government agency entered the fray. C.C. Garner, chief inspector of the U.S. Post Office Department, wrote to Astin requesting yet another test of AD-X2. NBS dutifully submitted a report that the additive had “no beneficial effects on the performance of lead acid batteries.” The post office then charged Pioneers with mail fraud, and Ritchie was ordered to appear at a hearing in Washington, D.C., on 6 April 1952. More tests were ordered, and the hearing was delayed for months.

Back in March 1950, Ritchie had lost his biggest champion when Merle Randall died. In preparation for the hearing, Ritchie hired another scientist: Keith J. Laidler, an assistant professor of chemistry at the Catholic University of America. Laidler wrote a critique of Circular 504, questioning NBS’s objectivity and testing protocols.

Ritchie also got Harold Weber, a professor of chemical engineering at MIT, to agree to test AD-X2 and to work as an unpaid consultant to the Senate Select Committee on Small Business.

Life was about to get more complicated for Astin and NBS.

Why did the NBS Director resign?

Trying to put an end to the Pioneers affair, Astin agreed in the spring of 1952 that NBS would conduct a public test of AD-X2 according to terms set by Ritchie. Once again, the bureau concluded that the battery additive had no beneficial effect.

However, NBS deviated slightly from the agreed-upon parameters for the test. Although the bureau had a good scientific reason for the minor change, Ritchie had a predictably overblown reaction—NBS cheated!

Then, on 18 December 1952, the Senate Select Committee on Small Business—for which Ritchie’s ally Harold Weber was consulting—issued a press release summarizing the results from the MIT tests: AD-X2 worked! The results “demonstrate beyond a reasonable doubt that this material is in fact valuable, and give complete support to the claims of the manufacturer.” NBS was “simply psychologically incapable of giving Battery AD-X2 a fair trial.”

The National Bureau of Standards’ regular tests of battery additives found that the products did not work as claimed. National Institute of Standards and Technology Digital Collections

But the press release distorted the MIT results. The MIT tests had focused on diluted solutions and slow charging rates, not the normal use conditions for automobiles, and even then AD-X2’s impact was marginal. Once NBS scientists got their hands on the report, they identified the flaws in the testing.

How did the AD-X2 controversy end?

The post office finally got around to holding its mail fraud hearing in the fall of 1952. Ritchie failed to attend in person and didn’t realize his reports would not be read into the record without him, which meant the hearing was decidedly one-sided in favor of NBS. On 27 February 1953, the Post Office Department issued a mail fraud alert. All of Pioneers’ mail would be stopped and returned to sender stamped “fraudulent.” If this charge stuck, Ritchie’s business would crumble.

But something else happened during the fall of 1952: Dwight D. Eisenhower, running on a pro-business platform, was elected U.S. president in a landslide.

Ritchie found a sympathetic ear in Eisenhower’s newly appointed Secretary of Commerce Sinclair Weeks, who acted decisively. The mail fraud alert had been issued on a Friday. Over the weekend, Weeks had a letter hand-delivered to Postmaster General Arthur Summerfield, another Eisenhower appointee. By Monday, the fraud alert had been suspended.

What’s more, Weeks found that Astin was “not sufficiently objective” and lacked a “business point of view,” and so he asked for Astin’s resignation on 24 March 1953. Astin complied. Perhaps Weeks thought this would be a mundane dismissal, just one of the thousands of political appointments that change hands with every new administration. That was not the case.

More than 400 NBS scientists—over 10 percent of the bureau’s technical staff—threatened to resign in protest. The American Association for the Advancement of Science also backed Astin and NBS. In an editorial published in Science, the AAAS called the battery additive controversy itself “minor.” “The important issue is the fact that the independence of the scientist in his findings has been challenged, that a gross injustice has been done, and that scientific work in the government has been placed in jeopardy,” the editorial stated.

National Bureau of Standards director Edward Condon [left] resigned in 1951 because investigations into his political beliefs were impeding his ability to represent the bureau. Incoming director Allen V. Astin [right] inherited the AD-X2 controversy, which eventually led to Astin’s dismissal and then his reinstatement after a large-scale protest by NBS researchers and others. National Institute of Standards and Technology Digital Collections

Clearly, AD-X2’s effectiveness was no longer the central issue. The controversy was a stand-in for a larger debate concerning the role of government in supporting small business, the use of science in making policy decisions, and the independence of researchers. Over the previous few years, highly respected scientists, including Edward Condon and J. Robert Oppenheimer, had been repeatedly investigated for their political beliefs. The request for Astin’s resignation was yet another government incursion into scientific freedom.

Weeks, realizing his mistake, temporarily reinstated Astin on 17 April 1953, the day the resignation was supposed to take effect. He also asked the National Academy of Sciences to test AD-X2 in both the lab and the field. By the time the academy’s report came out in October 1953, Weeks had permanently reinstated Astin. The report, unsurprisingly, concluded that NBS was correct: AD-X2 had no merit. Science had won.

NIST makes a movie

On 9 December 2023, NIST released the 20-minute docudrama The AD-X2 Controversy. The film won the Best True Story Narrative and Best of Festival at the 2023 NewsFest Film Festival. I recommend taking the time to watch it.

“The AD-X2 Controversy” [video], via www.youtube.com

Many of the actors are NIST staff and scientists, and they really get into their roles. Much of the dialogue comes verbatim from primary sources, including congressional hearings and contemporary newspaper accounts.

Despite being an in-house production, NIST’s film has a Hollywood connection. The film features brief interviews with actors John and Sean Astin (of The Lord of the Rings and Stranger Things fame)—NBS director Astin’s son and grandson.

The AD-X2 controversy is just as relevant today as it was 70 years ago. Scientific research, business interests, and politics remain deeply entangled. If the public is to have faith in science, it must have faith in the integrity of scientists and the scientific method. I have no objection to science being challenged—that’s how science moves forward—but we have to make sure that neither profit nor politics is tipping the scales.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the August 2024 print issue as “The AD-X2 Affair.”

References


I first heard about AD-X2 after my IEEE Spectrum editor sent me a notice about NIST’s short docudrama The AD-X2 Controversy, which you can, and should, stream online. NIST held a colloquium on 31 July 2018 with John Astin and his brother Alexander (Sandy), where they recalled what it was like to be college students when their father’s reputation was on the line. The agency has also compiled a wonderful list of resources, including many of the primary source government documents.

The AD-X2 controversy played out in the popular media, and I read dozens of articles following the almost daily twists and turns in the case in the New York Times, Washington Post, and Science.

I found Elio Passaglia’s A Unique Institution: The National Bureau of Standards 1950-1969 to be particularly helpful. The AD-X2 controversy is covered in detail in Chapter 2: Testing Can Be Troublesome.

A number of graduate theses have been written about AD-X2. One I consulted was Samuel Lawrence’s 1958 thesis “The Battery AD-X2 Controversy: A Study of Federal Regulation of Deceptive Business Practices.” Lawrence also published the 1962 book The Battery Additive Controversy.


A Map of the Metaverse

28 September 2021 at 00:19

I want a map of the Metaverse.

I want that moment in Red Dead Redemption 2 when, headed towards Saint-Denis on horseback, you mistakenly take a left turn before Rhodes and find yourself in the deep swamp, discovering little shacks or facing the terrifying jaws of a crocodile.

You check the map again (the one at the top of this post), and realize you've ended up deep in the marsh, north of where you intended.

You adjust course for the bright lights of Saint-Denis, where you’ll maybe have a shave and a haircut, a night playing poker.

I want to explore the Metaverse in a spaceship. Like Stellaris, I want to be able to zoom in and out - from the galaxy level down to individual solar systems and planets.


Or maybe I'd want to board a pirate ship - and in the distance a cluster of islands beckons, the shimmering glimpse of...are those apes? Are they really drinking martinis at the beach?

I want a map of the Metaverse because I like the idea of long, slow journeys. I like the idea of geography being revealed, of being immersed in a place, of decoding the pathways and history.

I like the idea of serendipity. Of discovery.

I like the idea that islands or planets would be grouped together - today, a cluster of Star Wars themed planets, tomorrow a group of corporate islands where I go to attend conferences on bitcoin or whatever.

How We Get There: Travel In the Metaverse

It sort of makes sense, doesn't it?

The Metaverse is being pitched as the next generation of the Internet. It will be spatial, persistent, three-dimensional and interoperable.

Which means that we’ll attend a concert in the newly interoperable Fortnite, jump over to hang out with the Bored Apes, regroup with our team in some new Facebook conference room. All without needing a separate download or a new account for each space that we enter.

In short it's, well, a universe - just a “meta” one.

Surely it has a geography?

Yes, each world within that universe may have its own map. Fortnite OpenIsland will have a map that’s different from Decentraland’s.

But wouldn’t you expect that these worlds are...connected? Wouldn't you expect continents, maybe? A Star Wars constellation of stars?

Probably. Otherwise isn't it just a more 3D version of the Web?

But the concept of a map of the Metaverse highlights some of the profound challenges in how our shared future universe is shaped.

Building a Map: A Thought Experiment

Here’s one version of what a map of the Metaverse could be. This isn’t a proposal, really…it’s a thought experiment.

  1. A new “meta domain” layer is created which serves as a map of the Metaverse. In theory, the map itself could be three-dimensional, but for now let’s think of it as a giant blank grid. Each point on that grid holds metadata: the URL of the world it contains, maybe even 3D objects showing what they look like from a distance.
  2. Worlds are registered on this map by their owners. They choose the placement and the size. The larger the space you decide to occupy (in order, in theory, to get more traffic - or as a way to contain multiple entry points), the more expensive it is. So, registering a single square might cost you $10, but each additional adjacent square costs four times the previous one. Two squares = $10 + $40. Four squares = $10 + $40 + $160 + $640, etc. (See the sketch just after this list.)
  3. You can MOVE your squares for a fee. The fee increases based on how frequently you move them. This will encourage ‘worlds’ to settle into stable clusters, while discouraging constant re-parking.
  4. The map is based on blockchain so that all of the placements and transactions are open and transparent.
  5. The base map has an API. Anyone can build on top of the base map. So, if someone wants to create a space-themed version of traveling across the map they can. Each map builder might find new ways to monetize their map: one might add an entertainment layer on top and charge worlds for adding icons or whatever.
  6. How each map maker represents travel between the worlds is up to them.
  7. The map starts out relatively small. It grows (maybe additional 'rings' are added to the core map, extending its size) based on density.
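As a rough Python sketch of the registration scheme above (the $10 base price and four-fold ratio come from item 2; the metadata fields are hypothetical stand-ins for item 1):

    from dataclasses import dataclass

    BASE_PRICE = 10.0   # cost of the first square (item 2)
    RATIO = 4.0         # each additional square costs 4x the previous one

    @dataclass
    class WorldRegistration:
        """Hypothetical metadata one grid point might hold (item 1)."""
        url: str        # the world this square points to
        x: int          # grid placement, chosen by the owner
        y: int
        squares: int    # how much of the grid the world occupies

    def registration_cost(squares: int) -> float:
        """Total price of a claim: a geometric series over the squares."""
        return sum(BASE_PRICE * RATIO**i for i in range(squares))

    for n in (1, 2, 4):
        print(n, registration_cost(n))   # 1 -> 10.0, 2 -> 50.0, 4 -> 850.0

The geometric pricing makes sprawling claims exponentially expensive, which is the point: a big footprint on the shared map should cost serious money.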

Finally, the fees would be collected by a non-profit DAO. These fees would fund the base infrastructure of the map of the Metaverse, and in addition would go towards:

  1. Open Metaverse standards and best practices. The Open Metaverse Initiative, for example, might be one of the bodies that receives funding from the DAO.
  2. Metaverse safety and privacy research.
  3. Policy and legal advocacy. Initiatives that focus on lobbying governments.

[As a side note, Facebook is spending considerable effort and money on lobbying government on Metaverse standards. Do we really want Facebook as the organization driving future standards?]

For the user, there is now a way to visualize the Metaverse. "Worlds" which add themselves to the map are making a statement: "we want to be part of this larger, interoperable universe...your avatar, your inventory and your wallet are welcome here".

As a user, you can travel through the Metaverse using the interfaces built by the companies on top of the base map: one of them is space-themed, one of them is corporate, and maybe they charge you for premium skins or for premium data layers.

Over time, the map might grow to be so large that specialized continents or map layers help us to navigate through it based on interests.

A sense of history will emerge: those few core worlds at the center and then spiralling galaxies spinning off. Entire continents for socializing, entire solar systems devoted to Bored Apes.

Like the Wayback Machine, the map of the Metaverse would be archived, its evolution instantly viewable, because the data is on the blockchain.

A Slower, Less Siloed Metaverse

As I say, this isn’t meant to be a proposal. It’s a thought experiment which lets us explore whether there are different ways to envision how users will travel through the Metaverse.

I was trying to think through a few things:

  • How do we let the community self-organize?
  • Can we encourage the kind of serendipity that used to be way more common on the Web? Can we find ways to discover new digital content that doesn’t rely solely on whatever we see on social media?
  • Can a map help us to slow down? What’s the future equivalent of doom-scrolling in the Metaverse? If there is more of a sense of travel…of journeys…can we help to create more human interactions and serendipity?
  • Can a map allow for cultures to emerge and flourish? Can we create little corners of the Metaverse for different forms of self-expression?
  • Can we find new ways to monetize ‘traffic’? If all we end up with are links and teleport hubs…isn’t that the same model that led to massive data silos like Google Search and social media as a main driver of traffic?
  • How will governance happen in the Metaverse? In addition to our avatars carrying around 'permissions', how will things like violence or kid-friendly spaces self-organize? Can this happen in a way that avoids huge data silos?

The Challenges of a Map

But even this thought experiment quickly bumps up against bigger questions.

A ‘Map of the Metaverse’ circles us back to questions about how it might best be constructed, what its boundaries are, and how it will be governed:

We want interoperability - but does this apply to everything? A lot of effort, for example, is being invested in ‘universal’ avatars. Whether you’re grabbing an avatar from Ready Player Me or maybe one of the super secret CloneX NFT avatars from RTFKT, you’re going to want to...well, to be YOU, right? But what happens when you drop into a Star Wars world? Will it require a dress code? Or what about your inventory? Will you be allowed to bring a gun to a knife fight?

Thinking about a map of the Metaverse is also a way of thinking about how we'll transition between spaces. I might be travelling the Metaverse in a spaceship but, like a crew member in Star Trek, I might need to assume the local culture and costume in order to 'do no harm'.

What effect do links and teleportation hubs have on the aggregation of audiences? Most of the current work on the Metaverse assumes some sort of 3D-style URL. It makes sense: a URL is really just a way to request content from some distant server. And so a Metaverse-URL is a way for a user's machine to request a 'world' from a distant server. But doesn't this also risk all of the same traffic-shaping and user-tracking woes of the past? Don't we just end up resorting to search giants and social media portals as our entryways into the Metaverse? What would a more community-focused 'search' look like? Might it look a bit like a map?
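As an illustration, a ‘Metaverse-URL’ could reuse the Web’s existing URL machinery wholesale. The metaverse:// scheme below is invented purely for this sketch; nothing like it has been standardized:

    from urllib.parse import urlsplit

    # A made-up scheme: the same host/path machinery the Web already uses,
    # but the resource requested is a world plus a spawn point inside it.
    url = urlsplit("metaverse://worlds.example.com/star-wars/tatooine?spawn=cantina")

    print(url.scheme)    # metaverse
    print(url.hostname)  # worlds.example.com -> the distant server hosting the world
    print(url.path)      # /star-wars/tatooine -> which world to load
    print(url.query)     # spawn=cantina -> where to place the avatar

Which is exactly the worry: if the addressing model is the same, the gatekeeping model can be, too.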

Not all worlds will look alike. In fact, a world might be a little room where you show off 3D scans of your cat. To the degree that the little room can be linked to other little rooms - is it a world? Maps create a challenge that way: you can't necessarily map the actual size of a world onto a 'meta-map'. But then if all we have are a bunch of separate 3D spatial experiences - will there be a Metaverse at all? Or is it just a 3D Web? A 3D Web supported by a bunch of optional standards, maybe, but not really what people mean by the next version of the Internet (or, indeed, the Metaverse).

Will 3D experiences bridge the physical world? By most definitions, the Metaverse encompasses AR, VR, mobile, etc. And it probably should! Computers don’t care whether a ‘world’ is real or not. They interpret spatial relations in roughly the same way either way. And as a user, I might want to attend an Ariana Grande concert - and do so either by logging in to a fully immersive world, or by having her pop up in my living room. For the developer, a single source of truth can be delivered to multiple devices and interfaces.

I mentioned previously that I think of Sketchfab as a headless CMS for the Metaverse. By which I meant that Metaverse content will often be separate from the delivery interfaces. If the Metaverse doesn’t privilege a particular interface, then AR devices earn the right to be included. But how would a map of the Metaverse apply when it’s distributed into reality itself?

Do You Want to Browse...Or Travel?

My little mental exercise on maps opened up all kinds of questions: about standards, governance, user experiences, and whether we're setting out to truly create spatially connected worlds, or we're creating a bunch of worlds that are only loosely connected.

But it also had me realizing that there is a range of possible futures. We don't necessarily need to choose: we can both browse, teleporting into little 3D spaces from Instagram, say, and also travel - setting sail in my imagined ship and discovering new worlds, delighting in the serendipity of a new universe.

Do you want to browse...or travel? And if both, what are the circumstances that would have you choose?

And most of all: do you want the Metaverse to evolve as a grassroots, community-led, cultural phenomenon...or do you want it to end up on the same path that brought us to the Web as it is today, dominated by a few silos, governed by control and measurement of our clicks?

The way that we develop a map of the Metaverse might not save us from the dystopian future that fiction warns us about - but at least thinking about it lets us ask how to arrive, like my poor tired horse, in the well-lit streets of Saint-Denis, ready for a bath, a shave, and a rest from the weary trail.


We're in this together. I'd love to hear your thoughts.

Email me at doug@bureauofbrightideas.com or message me on Twitter.

Let's start a conversation.

