Investment powerhouse BlackRock is set to launch a massive AI-focused fund, exceeding $30 billion, in collaboration with Microsoft and the Abu Dhabi-backed investment outfit MGX, the FT reported today. According to the outlet, the fund — among Wall Street’s largest — will focus on creating data centers and funding energy infrastructure to support AI. Chip […]
The Unicode Consortium has finalized and released version 16.0 of the Unicode standard, the elaborate character set that ensures that our phones, tablets, PCs, and other devices can all communicate and interoperate with each other. The update adds 5,185 new characters to the standard, bringing the total up to a whopping 154,998.
Of those 5,185 characters, the ones that will get the most attention are the eight new emoji characters, including a shovel, a fingerprint, a leafless tree, a radish (formally classified as "root vegetable"), a harp, a purple splat that evokes the '90s Nickelodeon logo, and a flag for the island of Sark. The standout, of course, is "face with bags under eyes," whose long-suffering thousand-yard stare perfectly encapsulates the era it has been born into. Per usual, Emojipedia has sample images that give you some idea of what these will look like when they're implemented by various operating systems, apps, and services.
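For readers who want to check what their own system supports: Python's standard unicodedata module reports which Unicode version its tables were built from, and a name lookup shows whether a given addition is present. Here's a minimal sketch, assuming U+1FAE9 is the code point assigned to "face with bags under eyes" in Unicode 16.0; older Python builds ship older tables and will report it as unknown.

```python
# Check whether this Python build's character database includes a
# Unicode 16.0 addition. The code point below is an assumption based
# on the published 16.0 emoji list.
import unicodedata

codepoint = 0x1FAE9  # "face with bags under eyes" (assumed 16.0 code point)
print("Bundled Unicode data version:", unicodedata.unidata_version)
try:
    print(f"U+{codepoint:04X} is named:", unicodedata.name(chr(codepoint)))
except ValueError:
    print(f"U+{codepoint:04X} is not in this build's character database yet.")
```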
Microsoft is laying off around 650 employees from its gaming division, according to an internal memo shared online by IGN. The latest cuts come eight months after the company laid off 1,900 in its gaming division, following its $68.7 billion acquisition of Activision Blizzard. In Xbox chief Phil Spencer’s memo to staff, he notes that […]
Microsoft has updated a key cryptographic library with two new encryption algorithms designed to withstand attacks from quantum computers.
The updates were made last week to SymCrypt, a core cryptographic code library that handles cryptographic functions in Windows and Linux. The library, started in 2006, provides operations and algorithms developers can use to safely implement secure encryption, decryption, signing, verification, hashing, and key exchange in the apps they create. The library supports federal certification requirements for cryptographic modules used in some governmental environments.
Massive overhaul underway
Despite the name, SymCrypt supports both symmetric and asymmetric algorithms. It’s the main cryptographic library Microsoft uses in products and services including Azure, Microsoft 365, all supported versions of Windows, Azure Stack HCI, and Azure Linux. The library provides cryptographic security used in email security, cloud storage, web browsing, remote access, and device management. Microsoft documented the update in a post on Monday.
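SymCrypt itself is a C library, so what follows is only a conceptual sketch of the key-encapsulation (KEM) flow that post-quantum key-exchange algorithms follow, written against the open-source liboqs-python bindings rather than Microsoft's API. The algorithm name string is an assumption and varies by liboqs version.

```python
# Conceptual KEM flow (encapsulate/decapsulate) -- not SymCrypt's API.
# Requires the liboqs-python package; the algorithm name may be
# "Kyber768" instead of "ML-KEM-768" on older liboqs builds.
import oqs

ALG = "ML-KEM-768"

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    public_key = receiver.generate_keypair()  # receiver publishes this
    ciphertext, sender_secret = sender.encap_secret(public_key)
    receiver_secret = receiver.decap_secret(ciphertext)
    assert sender_secret == receiver_secret  # both sides now share a key
```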
According to a report from Korean tech outlet The Elec, Microsoft has contracted Samsung to supply micro OLED display panels for what is described as “next-generation mixed reality devices.”
Citing industry sources, the report maintains the order could reach into the “hundreds of thousands” of micro OLED displays, with such a Microsoft XR device reportedly slated to arrive as early as 2026.
Unlike Meta’s current line of Quest headsets, the alleged Microsoft headset will be used for “enjoying or watching content such as games or movies rather than the metaverse,” the report maintains (machine translated from Korean), potentially putting it in competition with Apple Vision Pro.
Since Microsoft abandoned its Windows Mixed Reality (WMR) platform late last year, and with ongoing stagnation around its HoloLens AR platform, the company has mostly concentrated on smaller XR software projects.
Since the release of Vision Pro earlier this year, however, competing—or at least preparing to compete—with Apple seems to be the order of the day.
Samsung and Google confirmed in July that their forthcoming “XR platform” will be announced sometime this year. The ‘platform’, which is thought to comprise hardware built by Samsung and an Android XR operating system built by Google, was reportedly delayed in an effort to better compete with Vision Pro.
A version of this post originally appeared on Tedium, Ernie Smith’s newsletter, which hunts for the end of the long tail.
These days, computer users take collaboration software for granted. Google Docs, Microsoft Teams, Slack, Salesforce, and so on, are such a big part of many people’s daily lives that they hardly notice them. But they are the outgrowth of years of hard work done before the Internet became a thing, when there was a thorny problem: How could people collaborate effectively when everyone’s using a stand-alone personal computer?
In the early days of the computing era, when IBM’s PC reigned supreme, collaboration was difficult. Ross Anthony Willis/Fairfax Media/Getty Images
How the PC made us forget about collaboration for a while
Imagine that it’s the early-to-mid-1980s and that you run a large company. You’ve invested a lot of money into personal computers, which your employees are now using—IBM PCs, Apple Macintoshes, clones, and the like. There’s just one problem: You have a bunch of computers, but they don’t talk to one another.
If you’re in a small office and need to share a file, it’s no big deal: You can just hand a floppy disk off to someone on the other side of the room. But what if you’re part of an enterprise company and the person you need to collaborate with is on the other side of the country? Passing your colleague a disk doesn’t work.
The new personal-computing technologies clearly needed to do more to foster collaboration. They needed to be able to take input from a large group of people inside an office, to allow files to be shared and distributed, and to let multiple users tweak and mash information with everyone being able to sign off on the final version.
The hardware that would enable such collaboration software, or “groupware” as it tended to be called early on, varied by era. In the 1960s and ’70s, it was usually a mainframe-to-terminal setup, rather than something using PCs. Later, in the 1980s, it was either a token ring or Ethernet network, which were competing local-networking technologies. But regardless of the hardware used for networking, the software for collaboration needed to be developed.
Stanford Research Institute engineer Douglas Engelbart is sometimes called “the father of groupware.” Getty Images
Some of the basic ideas behind groupware were first forged at the Stanford Research Institute by a Douglas Engelbart–led team, in the 1960s, working on what they called an oN-Line System (NLS). An early version of NLS was presented in 1968 during what became known as the “Mother of All Demos.” It was essentially a coming-out party for many computing innovations that would eventually become commonplace. If you have 90 minutes and want to see something 20-plus years ahead of its time, watch this video.
By the late 1980s, at a point when the PC had begun to dominate the workplace, Engelbart was less impressed with what had been gained than with what had been lost in the process. He wrote (with Harvey Lehtman) in Byte magazine in 1988:
The emergence of the personal computer as a major presence in the 1970s and 1980s led to tremendous increases in personal productivity and creativity. It also caused setbacks in the development of tools aimed at increasing organizational effectiveness—tools developed on the older time-sharing systems.
To some extent, the personal computer was a reaction to the overloaded and frustrating time-sharing systems of the day. In emphasizing the power of the individual, the personal computer revolution turned its back on those tools that led to the empowering of both co-located and distributed work groups collaborating simultaneously and over time on common knowledge work.
The introduction of local- and wide-area networks into the personal computer environment and the development of mail systems are leading toward some of the directions explored on the earlier systems. However, some of the experiences of those earlier pioneering systems should be considered anew in evolving newer collaborative environments.
Groupware finally started to catch on in the late 1980s, with tech companies putting considerable resources into developing collaboration software—perhaps taken in by the idea of “orchestrating work teams,” as an Infoworld piece characterized the challenge in 1988. The San Francisco Examiner reported, for example, that General Motors had invested in the technology and was beginning to require its suppliers to accept purchase orders electronically.
Focusing on collaboration software was a great way for independent software companies to stand out, this being an area that large companies—Microsoft in particular—had basically ignored. Today, Microsoft is the 800-pound gorilla of collaboration software, thanks to its combination of Teams and Office 365. But it took the tech giant a very long while to get there: Microsoft started taking the market seriously only around 1992.
One company in particular was well-positioned to take advantage of the opening that existed in the 1980s. That was the Lotus Development Corporation, a Cambridge, Mass.–based software company that made its name with its Lotus 1-2-3 spreadsheet program for IBM PCs.
Lotus did not invent groupware or coin the word—on top of Engelbart’s formative work at Stanford, the term had been around for years before Lotus Notes came on the scene. But it was the company that brought collaboration software to everyone’s attention.
Ray Ozzie [left] was primarily responsible for the development of Lotus Notes, the first popular groupware solution.
Left: Ann E. Yow-Dyson/Getty Images; Right: James Keyser/Getty Images
The person most associated with the development of Notes was Ray Ozzie, who was recruited to Lotus after spending time working on VisiCalc, an early spreadsheet program. Ozzie essentially built out what became Notes while working at Iris Associates, a direct offshoot of Lotus that he founded to develop the Notes application. After some years of development in stealth mode, the product was released in 1989.
Ozzie explained his inspiration for Notes to Jessica Livingston, who described this history in her book, Founders At Work:
In Notes, it was (and this is hard to imagine because it was a different time) the concept that we’d all be using computers on our desktops, and therefore we might want to use them as communication tools. This was a time when PCs were just emerging as spreadsheet tools and word processing replacements, still available only on a subset of desks, and definitely no networks. It was ’82 when I wrote the specs for it. It had been based on a system called PLATO [Programmed Logic for Automatic Teaching Operations] that I’d been exposed to at college, which was a large-scale interactive system that people did learning and interactive gaming on, and things like that. It gave us a little bit of a peek at the future—what it would be like if we all had access to interactive systems and technology.
Building an application based on PLATO turned out to be the right idea at the right time, and it gave Lotus an edge in the market. Notes included email, a calendaring and scheduling tool, an address book, a shared database, and programming capabilities, all in a single front-end application.
Lotus Notes on Computer Chronicles Fall 1989
As an all-in-one platform built for scale, Notes gained a strong reputation as an early example of what today would be called a business-transformation tool, one that managed many elements of collaboration. It was complicated from an IT standpoint and required a significant investment to maintain. Perhaps most groundbreaking, though, was that Notes helped turn PCs into something that large companies could readily use.
As Fortune noted in 1994, Lotus had a massive lead in the groupware space, in part because the software worked essentially the same anywhere in a company’s network. We take that for granted now, but back then it was considered magical:
Like Lotus 1-2-3, Notes is easy to customize. A sales organization, for instance, might use it to set up an electronic bulletin board that lets people pool information about prospective clients. If some of the info is confidential, it can be restricted so not everyone can call it up.
Notes makes such homegrown applications and the data they contain accessible throughout an organization. The electronic bulletin board you consult in Singapore is identical to the one your counterparts see in Sioux City, Iowa. The key to this universality is a procedure called replication, by which Notes copies information from computer to computer throughout the network. You might say Ozzie figured out how to make the machines telepathic—each knows what the others are thinking.
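The real Notes replication engine was far more sophisticated, but a toy sketch can convey the basic idea: each replica stores documents with revision timestamps, and a sync pass keeps whichever revision is newer. Everything here (the store layout, the last-writer-wins rule) is illustrative, not Lotus's actual algorithm.

```python
# Toy last-writer-wins replication between two document stores,
# each mapping doc_id -> (timestamp, body). Illustrative only.
def replicate(replica_a: dict, replica_b: dict) -> None:
    """Merge two stores in place so both end up with the newest revisions."""
    for doc_id in set(replica_a) | set(replica_b):
        revisions = [r for r in (replica_a.get(doc_id), replica_b.get(doc_id))
                     if r is not None]
        newest = max(revisions, key=lambda rev: rev[0])
        replica_a[doc_id] = replica_b[doc_id] = newest

singapore = {"client-42": (100, "Met prospect; follow up Tuesday")}
sioux_city = {"client-42": (95, "Prospect identified"),
              "client-7": (90, "Deal closed")}
replicate(singapore, sioux_city)
assert singapore == sioux_city  # every office sees the same bulletin board
```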
A 1996 commercial for Notes highlighted its use by FedEx. Other commercials would use the stand-up comedian Denis Leary or be highly conceptual. Rarely, if ever, would these television advertisements show the software.
In the mid-1990s, it was common for magazines to publish stories about how Notes reshaped businesses large and small.
A 1996 Inc. piece, for example, described how a natural-foods company successfully produced a new product in just eight months, a feat the company directly credited to Notes.
“It’s become our general manager,” Groveland Trading Co. president Steve McDonnell recalled.
Notes wasn’t cheap (InfoWorld lists the price circa 1990 as US $62,000), and it was complicated to manage. But the results it enabled were hard to ignore. IBM noticed and ended up buying Lotus in 1995, almost entirely to get ahold of Notes. Even earlier, Microsoft had realized that office collaboration was a big deal, and it wanted in.
Microsoft’s first foray into collaboration software was its 1992 release of Windows for Workgroups. Despite great efforts to promote the release, the software was not a commercial success. Daltrois/Flickr
Microsoft had high hopes for Windows for Workgroups, the networking-focused variant of its popular Windows 3.1 operating environment. To create buzz for it, the company pulled out all the stops. Seriously.
In the fall of 1992, Microsoft paid something like $2 million to put on a Broadway production with Bill Gates literally center stage, at New York City’s Gershwin Theater, one of the largest on Broadway. It was a wild show, and yet, somehow, there is no video of this event currently posted online—until now. The only person I know of who has a video recording of this extravaganza is, fittingly enough, Ray Ozzie, the groupware guru and Notes inventor. Ozzie later served as a top executive at Microsoft, famously replacing Bill Gates as Chief Software Architect in the mid-2000s, and he has shared this video with us for this post:
The 1992 one-day event was not a hit. Watch to see why. (Courtesy of Ray Ozzie and the Microsoft Corporation)
00:00 Opening number
02:23 “My VGA can hardly wait for your CPU to reciprocate”
05:17 Bill Gates enters the stage
27:55 “Get ready, get set” musical number
31:50 Bit with Mike Appe, Microsoft VP of sales
58:30 Bill Gates does jumping jacks
A 1992 Washington Post article describes the performance, which involved dozens of actors, some of whom were dressed like the Blues Brothers. At one point, Gates did jumping jacks. Gates himself later said, “That was so bad, I thought [then Microsoft CEO] Ballmer was going to retch.” For those who don’t have an extra hour to spend, here is a summary:
To get a taste of the show, watch this news segment from channel 4.
Courtesy of Microsoft Corporate Archives
Despite all the effort to generate fanfare, Windows for Workgroups was not a hit. While Windows 3.1 was dominant, Microsoft had built a program that didn’t capture the burgeoning interest in collaborative work in any real way. Among other things, it didn’t initially support TCP/IP, the networking protocol that was winning the market and enabling the rise of the Internet.
In its original version, Windows for Workgroups carried such a negative reputation in Microsoft’s own headquarters that the company nicknamed it Windows for Warehouses, referring to the company’s largely unsold inventory, according to Microsoft’s own expert on company lore, Raymond Chen.
Unsuccessful as it was, the fact that it existed in the first place hinted at Microsoft’s general acknowledgement that perhaps this networking thing was going to catch on with its users.
Launched in late 1992, a few months after Windows 3.1 itself, the product was Microsoft’s first attempt at integrated networking in a Windows package. The software enabled file sharing across servers, printer sharing, and email—table stakes in the modern day but at the time a big deal.
This video presents a very accurate view of what it was like to use Windows in 1994.
Unfortunately, it was a big deal that came a few years late. Microsoft itself was so lukewarm on the product that it replaced it with Windows for Workgroups 3.11 just a year later, and the update’s marquee feature wasn’t improved network support but increased disk speed. Confusingly, the company had just released Windows NT by this point, an operating system that better matched the needs of enterprise customers.
The work group terminology Microsoft introduced with Windows for Workgroups stuck around, though, and it is actually used in Windows to this day.
In 2024, group-oriented software feels like the default paradigm, with single-user apps being the anomaly. Over time, groupware became so pervasive that people no longer think of it as groupware, though there are plenty of big, hefty, groupware-like tools out there, like Salesforce. Now, it’s just software. But no one should forget the long history of collaboration software or its ongoing value. It’s what got most of us through the pandemic, even if we never used the word “groupware” to describe it.
Tech companies have been caught up in a race to build the biggest large language models (LLMs). In April, for example, Meta announced the 400-billion-parameter Llama 3, which contains twice as many parameters—the variables that determine how a model responds to queries—as OpenAI’s original ChatGPT model from 2022. Although not confirmed, GPT-4 is estimated to have about 1.8 trillion parameters.
In the last few months, however, some of the largest tech companies, including Apple and Microsoft, have introduced small language models (SLMs). These models are a fraction of the size of their LLM counterparts and yet, on many benchmarks, can match or even outperform them in text generation.
On 10 June, at Apple’s Worldwide Developers Conference, the company announced its “Apple Intelligence” models, which have around 3 billion parameters. And in late April, Microsoft released its Phi-3 family of SLMs, featuring models housing between 3.8 billion and 14 billion parameters.
OpenAI’s CEO Sam Altman believes we’re at the end of the era of giant models.
In a series of tests, the smallest of Microsoft’s models, Phi-3-mini, rivaled OpenAI’s GPT-3.5 (175 billion parameters), which powers the free version of ChatGPT, and outperformed Google’s Gemma (7 billion parameters). The tests evaluated how well a model understands language by prompting it with questions about mathematics, philosophy, law, and more. More interesting still, Microsoft’s Phi-3-small, with 7 billion parameters, performed markedly better than GPT-3.5 on many of these benchmarks.
Aaron Mueller, who researches language models at Northeastern University in Boston, isn’t surprised SLMs can go toe-to-toe with LLMs in select functions. He says that’s because scaling the number of parameters isn’t the only way to improve a model’s performance: Training it on higher-quality data can yield similar results too.
Microsoft’s Phi models were trained on fine-tuned “textbook-quality” data, says Mueller, which have a more consistent style that’s easier to learn from than the highly diverse text from across the Internet that LLMs typically rely on. Similarly, Apple trained its SLMs exclusively on richer and more complex datasets.
The rise of SLMs comes at a time when the performance gap between LLMs is quickly narrowing and tech companies are looking to deviate from standard scaling laws and explore other avenues for performance upgrades. At an event in April, OpenAI’s CEO Sam Altman said he believes we’re at the end of the era of giant models. “We’ll make them better in other ways.”
Because SLMs don’t consume nearly as much energy as LLMs, they can also run locally on devices like smartphones and laptops (instead of in the cloud) to preserve data privacy and personalize them to each person. In March, Google rolled out Gemini Nano to the company’s Pixel line of smartphones. The SLM can summarize audio recordings and produce smart replies to conversations without an Internet connection. Apple is expected to follow suit later this year.
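Running one of these models locally is now straightforward. Here's a minimal sketch, assuming the Hugging Face transformers library and the publicly posted microsoft/Phi-3-mini-4k-instruct checkpoint (the first run downloads several gigabytes of weights, and generation on a CPU is slow):

```python
# Run a small language model locally with Hugging Face transformers.
# Assumes transformers and torch are installed and ~8 GB of RAM is free.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "In one sentence, why do small language models matter?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```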
More importantly, SLMs can democratize access to language models, says Mueller. So far, AI development has been concentrated into the hands of a couple of large companies that can afford to deploy high-end infrastructure, while other, smaller operations and labs have been forced to license them for hefty fees.
Since SLMs can be easily trained on more affordable hardware, says Mueller, they’re more accessible to those with modest resources and yet still capable enough for specific applications.
In addition, while researchers agree there’s still a lot of work ahead to overcome hallucinations, carefully curated SLMs bring them a step closer to building responsible AI that is also interpretable, which would potentially allow researchers to debug specific LLM issues and fix them at the source.
For researchers like Alex Warstadt, a computer science researcher at ETH Zurich, SLMs could also offer new, fascinating insights into a longstanding scientific question: how children acquire their first language. Warstadt, alongside a group of researchers including Northeastern’s Mueller, organizes BabyLM, a challenge in which participants optimize language-model training on small datasets.
Not only could SLMs unlock new secrets of human cognition, but they could also help improve generative AI. By the time children turn 13, they’ve been exposed to about 100 million words and are better than chatbots at language, despite having access to only 0.01 percent of the data (100 million words is roughly one ten-thousandth of the trillion-plus words used to train the largest models). While no one knows what makes humans so much more efficient, says Warstadt, “reverse engineering efficient humanlike learning at small scales could lead to huge improvements when scaled up to LLM scales.”
Microsoft recently caught state-backed hackers using its generative AI tools to help with their attacks. In the security community, the immediate questions weren’t about how hackers were using the tools (that was utterly predictable), but about how Microsoft figured it out. The natural conclusion was that Microsoft was spying on its AI users, looking for harmful hackers at work.
Some pushed back against characterizing Microsoft’s actions as “spying.” Of course cloud service providers monitor what users are doing. And because we expect Microsoft to be doing something like this, it’s not fair to call it spying.
We see this argument as an example of our shifting collective expectations of privacy. To understand what’s happening, we can learn from an unlikely source: fish.
In the mid-20th century, scientists began noticing that the number of fish in the ocean—so vast as to underlie the phrase “There are plenty of fish in the sea”—had started declining rapidly due to overfishing. They had already seen a similar decline in whale populations, when the post-WWII whaling industry nearly drove many species extinct. In whaling and later in commercial fishing, new technology made it easier to find and catch marine creatures in ever greater numbers. Ecologists, specifically those working in fisheries management, began studying how and when certain fish populations had gone into serious decline.
One scientist, Daniel Pauly, realized that researchers studying fish populations were making a major error when trying to determine acceptable catch size. It wasn’t that scientists didn’t recognize the declining fish populations. It was just that they didn’t realize how significant the decline was. Pauly noted that each generation of scientists had a different baseline to which they compared the current statistics, and that each generation’s baseline was lower than that of the previous one.
What seems normal to us in the security community is whatever was commonplace at the beginning of our careers.
Pauly called this “shifting baseline syndrome” in a 1995 paper. The baseline most scientists used was the one that was normal when they began their research careers. By that measure, each subsequent decline wasn’t significant, but the cumulative decline was devastating. Each generation of researchers came of age in a new ecological and technological environment, inadvertently masking an exponential decline.
Pauly’s insights came too late to help those managing some fisheries. The ocean suffered catastrophes such as the complete collapse of the Northwest Atlantic cod population in the 1990s.
Internet surveillance, and the resultant loss of privacy, is following the same trajectory. Just as certain fish populations in the world’s oceans have fallen 80 percent, from previously having fallen 80 percent, from previously having fallen 80 percent (ad infinitum), our expectations of privacy have similarly fallen precipitously. The pervasive nature of modern technology makes surveillance easier than ever before, while each successive generation of the public is accustomed to the privacy status quo of their youth. What seems normal to us in the security community is whatever was commonplace at the beginning of our careers.
Historically, people controlled their computers, and software was standalone. The always-connected cloud-deployment model of software and services flipped the script. Most apps and services are designed to be always-online, feeding usage information back to the company. A consequence of this modern deployment model is that everyone—cynical tech folks and even ordinary users—expects that what you do with modern tech isn’t private. But that’s because the baseline has shifted.
AI chatbots are the latest incarnation of this phenomenon: They produce output in response to your input, but behind the scenes there’s a complex cloud-based system keeping track of that input—both to improve the service and to sell you ads.
Shifting baselines are at the heart of our collective loss of privacy. The U.S. Supreme Court has long held that our right to privacy depends on whether we have a reasonable expectation of privacy. But expectation is a slippery thing: It’s subject to shifting baselines.
The question remains: What now? Fisheries scientists, armed with knowledge of shifting-baseline syndrome, now look at the big picture. They no longer rely on relative measures, such as comparing this decade with the last decade. Instead, they take a holistic, ecosystem-wide perspective to see what a healthy marine ecosystem, and thus a sustainable catch, should look like. They then turn these scientifically derived sustainable-catch figures into limits to be codified by regulators.
In privacy and security, we need to do the same. Instead of comparing to a shifting baseline, we need to step back and look at what a healthy technological ecosystem would look like: one that respects people’s privacy rights while also allowing companies to recoup costs for services they provide. Ultimately, as with fisheries, we need to take a big-picture perspective and be aware of shifting baselines. A scientifically informed and democratic regulatory process is required to preserve a heritage—whether it be the ocean or the Internet—for the next generation.