Delivering the next-generation barcode

The world’s first barcode, designed in 1948, took more than 25 years to make it out of the lab and onto a retail package. Since then, the barcode has done much more than make grocery checkouts faster—it has remade our understanding of how physical objects can be identified and tracked, creating a new pace and set of expectations for the speed and reliability of modern commerce.

Nearly eighty years later, a new iteration of that technology, which encodes data in two dimensions, is poised to take the stage. Today’s 2D barcode is not only out of the lab but “open to a world of possibility,” says Carrie Wilkie, senior vice president of standards and technology at GS1 US.

2D barcodes encode substantially more information than their 1D counterparts. This enables them to link physical objects to a wide array of digital resources. For consumers, 2D barcodes can provide a wealth of product information, from food allergens, expiration dates, and safety recalls to detailed medication use instructions, coupons, and product offers. For businesses, 2D barcodes can enhance operational efficiencies, create traceability at the lot or item level, and drive new forms of customer engagement.

An array of 2D barcode types supports the information needs of a variety of industries. The GS1 DataMatrix, for example, is used on medications and medical devices, encoding expiration dates, batch and lot numbers, and FDA National Drug Codes. The QR Code is familiar to consumers who have used one to open a website from their phone. Adding a GS1 Digital Link URI to a QR Code enables it to serve two purposes: a traditional barcode for supply chain operations, enabling tracking throughout the supply chain and price lookup at checkout, and a consumer-facing link to digital information, like expiry dates and serial numbers.
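To make the dual-purpose idea concrete, here is a minimal sketch of how such a URI might be assembled, assuming GS1's id.gs1.org resolver domain and the standard GS1 Application Identifiers; exact attribute placement varies by implementation, so treat it as illustrative rather than a reference encoder.

```python
# Minimal sketch: assembling a GS1 Digital Link URI for a QR Code.
# GS1 Application Identifiers (AIs) used here: 01 = GTIN, 10 = batch/lot,
# 21 = serial number (path keys), 17 = expiry date (query attribute).
# "https://id.gs1.org" is GS1's resolver; brands may use their own domain.

def build_gs1_digital_link(gtin: str, lot: str = "", serial: str = "",
                           expiry_yymmdd: str = "",
                           domain: str = "https://id.gs1.org") -> str:
    path = f"/01/{gtin}"
    if lot:
        path += f"/10/{lot}"
    if serial:
        path += f"/21/{serial}"
    query = f"?17={expiry_yymmdd}" if expiry_yymmdd else ""
    return f"{domain}{path}{query}"

# One URI, two purposes: scanners parse the AIs for supply chain data,
# while a phone opens it as an ordinary product web link.
print(build_gs1_digital_link("09506000134352", lot="ABC123",
                             serial="12345", expiry_yymmdd="261231"))
# -> https://id.gs1.org/01/09506000134352/10/ABC123/21/12345?17=261231
```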

Regardless of type, however, all 2D barcodes require a business ecosystem backed by data. To capture new value from advanced barcodes, organizations must supply and manage clean, accurate, and interoperable data around their products and materials. For 2D barcodes to deliver on their potential, businesses will need to collaborate with partners, suppliers, and customers and commit to common data standards across the value chain.

Driving the demand for 2D barcodes

Shifting to 2D barcodes—and enabling the data ecosystems behind them—will require investment by business. Consumer engagement, compliance, and sustainability are among the many factors driving this transition.

Real-time consumer engagement: Today’s customers want to feel connected to the brands they interact with and purchase from. Information is a key element of that engagement and empowerment. “When I think about customer satisfaction,” says Leslie Hand, group vice president for IDC Retail Insights, “I’m thinking about how I can provide more information that allows them to make better decisions about their own lives and the things they buy.”

2D barcodes can help by connecting consumers to online content in real time. “If, by using a 2D barcode, you have the capability to connect to a consumer in a specific region, or a specific store, and you have the ability to provide information to that consumer about the specific product in their hand, that can be a really powerful consumer engagement tool,” says Dan Hardy, director of customer operations for HanesBrands, Inc. “2D barcodes can bring brand and product connectivity directly to an individual consumer, and create an interaction that supports your brand message at an individual consumer/product level.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Chasing AI’s value in life sciences

Inspired by an unprecedented opportunity, the life sciences sector has gone all in on AI. For example, in 2023, Pfizer introduced an internal generative AI platform expected to deliver $750 million to $1 billion in value. And Moderna partnered with OpenAI in April 2024, scaling its AI efforts to deploy ChatGPT Enterprise, embedding the tool’s capabilities across business functions from legal to research.

In drug development, German pharmaceutical company Merck KGaA has partnered with several AI companies for drug discovery and development. And Exscientia, a pioneer in using AI in drug discovery, is taking more steps toward integrating generative AI drug design with robotic lab automation in collaboration with Amazon Web Services (AWS).

Given rising competition, higher customer expectations, and growing regulatory challenges, these investments are crucial. But to maximize their value, leaders must carefully consider how to balance the key factors of scope, scale, speed, and human-AI collaboration.

The early promise of connecting data

The common refrain from data leaders across all industries—but specifically from those within data-rich life sciences organizations—is “I have vast amounts of data all over my organization, but the people who need it can’t find it,” says Dan Sheeran, general manager of health care and life sciences for AWS. And in a complex healthcare ecosystem, data can come from multiple sources, including hospitals, pharmacies, insurers, and patients.

“Addressing this challenge,” says Sheeran, “means applying metadata to all existing data and then creating tools to find it, mimicking the ease of a search engine. Until generative AI came along, though, creating that metadata was extremely time consuming.”
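As a rough sketch of the pattern Sheeran describes, the snippet below uses a generative model to draft metadata for existing documents and then builds a simple index; `call_llm` is a hypothetical stand-in for whichever model API an organization actually uses.

```python
# Sketch: using a generative model to draft searchable metadata for
# existing documents, then indexing it for search-engine-style lookup.
# call_llm is a hypothetical stub; swap in your model provider's API.

import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your model provider's API call")

TAGGING_PROMPT = """Read the document below and return JSON with keys:
"title", "topics" (a list), "source_type" (e.g. hospital, pharmacy,
insurer, patient), and "summary" (one sentence).

Document:
{doc}
"""

def generate_metadata(doc_text: str) -> dict:
    # The model drafts the metadata that previously had to be written by hand.
    return json.loads(call_llm(TAGGING_PROMPT.format(doc=doc_text)))

def build_index(docs: dict) -> dict:
    # Map document IDs to generated metadata; feed the result into any
    # search index so the people who need the data can actually find it.
    return {doc_id: generate_metadata(text) for doc_id, text in docs.items()}
```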

Mahmood Majeed, ZS’s global head of the digital and technology practice, notes that his teams regularly work on connected data programs, because “connecting data to enable connected decisions across the enterprise gives you the ability to create differentiated experiences.”

Majeed points to Sanofi’s well-publicized example of connecting data with its analytics app, plai, which streamlines research and automates time-consuming data tasks. With this investment, Sanofi reports reducing research processes from weeks to hours, with the potential to improve target identification in therapeutic areas like immunology, oncology, and neurology by 20% to 30%.

Achieving the payoff of personalization

Connected data also allows companies to focus on personalized last-mile experiences. This involves tailoring interactions with healthcare providers and understanding patients’ individual motivations, needs, and behaviors.

Early efforts around personalization have relied on “next best action” or “next best engagement” models to do this. These traditional machine learning (ML) models suggest the most appropriate information for field teams to share with healthcare providers, based on predetermined guidelines.

Compared with generative AI models, traditional machine learning models can be inflexible, unable to adapt to individual provider needs, and often struggle to connect with other data sources that could provide meaningful context. The insights they produce can therefore be helpful but limited.

Sheeran notes that companies have a real opportunity to improve their access to connected data for better decision-making: “Because the technology is generative, it can create context based on signals. How does this healthcare provider like to receive information? What insights can we draw about the questions they’re asking? Can their professional history or past prescribing behavior help us provide a more contextualized answer? This is exactly what generative AI is great for.”

Beyond this, pharmaceutical companies spend millions of dollars annually to customize marketing materials. They must ensure the content is translated, tailored to the audience, and consistent with regulations in each location where they offer products and services. A process that usually takes weeks to develop individual assets has become a perfect use case for generative copy and imagery. With generative AI, the process is reduced from weeks to minutes and creates competitive advantage with lower costs per asset, Sheeran says.

Accelerating drug discovery with AI, one step at a time

Perhaps the greatest hope for AI in life sciences is its ability to generate insights and intellectual property using biology-specific foundation models. “Our customers have seen the potential for very, very large models to greatly accelerate certain discrete steps in the drug discovery and development processes,” Sheeran says. He continues, “Now we have a much broader range of models available, and an even larger set of models coming that tackle other discrete steps.”

By Sheeran’s count, there are approximately six major categories of biology-specific models, each containing five to 25 models under development or already available from universities and commercial organizations.

The intellectual property generated by biology-specific models is a significant consideration, supported by services such as Amazon Bedrock, which ensures customers retain control over their data, with transparency and safeguards to prevent unauthorized retention and misuse.

Finding differentiation in life sciences with scope, scale, and speed

Organizations can differentiate with scope, scale, and speed, while determining how AI can best augment human ingenuity and judgment. “Technology has become so easy to access. It’s omnipresent. What that means is that it’s no longer a differentiator on its own,” says Majeed. He suggests that life sciences leaders consider:

Scope: Have we zeroed in on the right problem? By clearly articulating the problem relative to the few critical things that could drive advantage, organizations can identify technology and business collaborators and set standards for measuring success and driving tangible results.

Scale: What happens when we implement a technology solution on a large scale? The highest-priority AI solutions should be the ones with the most potential for results. Scale determines whether an AI initiative will have a broader, more widespread impact on a business, which provides the window for a greater return on investment, says Majeed.

By thinking through the implications of scale from the beginning, organizations can be clear on the magnitude of change they expect and how bold they need to be to achieve it. The boldest commitment to scale is when companies go all in on AI, as Sanofi is doing, setting goals to transform the entire value chain and setting the tone from the very top.

Speed: Are we set up to quickly learn and correct course? Organizations that can rapidly learn from their data and AI experiments, adjust based on those learnings, and continuously iterate are the ones that will see the most success. Majeed emphasizes, “Don’t underestimate this component; it’s where most of the work happens. A good partner will set you up for quick wins, keeping your teams learning and maintaining momentum.”

Sheeran adds, “ZS has become a trusted partner for AWS because our customers trust that they have the right domain expertise. A company like ZS has the ability to focus on the right uses of AI because they’re in the field and on the ground with medical professionals, which gives them the ability to constantly stay ahead of the curve by exploring the best ways to improve their current workflows.”

Human-AI collaboration at the heart

Despite the allure of generative AI, the human element is the ultimate determinant of how it’s used. In certain cases, traditional technologies outperform it, with less risk, so understanding what it’s good for is key. By cultivating broad technology and AI fluency throughout the organization, leaders can teach their people to find the most powerful combinations of human-AI collaboration for technology solutions that work. After all, as Majeed says, “it’s all about people—whether it’s customers, patients, or our own employees’ and users’ experiences.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Cultivating the next generation of AI innovators in a global tech hub

A few years ago, I had to make one of the biggest decisions of my life: continue as a professor at the University of Melbourne or move to another part of the world to help build a brand new university focused entirely on artificial intelligence.

With the rapid development we have seen in AI over the past few years, I came to the realization that educating the next generation of AI innovators in an inclusive way and sharing the benefits of technology across the globe is more important than maintaining the status quo. I therefore packed my bags for the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) in Abu Dhabi.

The world in all its complexity

Today, the rewards of AI are mostly enjoyed by a few countries in what the Oxford Internet Institute dubs the “Compute North.” These countries, such as the US, the UK, France, Canada, and China, have dominated research and development and built state-of-the-art AI infrastructure capable of training foundational models. This should come as no surprise, as these countries are home to many of the world’s top universities and large tech corporations.

But this concentration of innovation comes at a cost for the billions of people who live outside these dominant countries and have different cultural backgrounds.

Large language models (LLMs) are illustrative of this disparity. Researchers have shown that many of the most popular multilingual LLMs perform poorly with languages other than English, Chinese, and a handful of other (mostly) European languages. Yet, there are approximately 6,000 languages spoken today, many of them in communities in Africa, Asia, and South America. Arabic alone is spoken by almost 400 million people and Hindi has 575 million speakers around the world.

For example, LLaMA 2 performs up to 50% better in English than in Arabic when measured using the LM-Evaluation-Harness framework. Meanwhile, Jais, an LLM co-developed by MBZUAI, exceeds LLaMA 2 in Arabic and is comparable to Meta’s model in English.
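For readers who want to reproduce this kind of comparison, here is a hedged sketch using the EleutherAI lm-evaluation-harness’s Python entry point; the exact API varies across harness versions, and the model IDs and task list are illustrative rather than the precise setup behind the figures above.

```python
# Sketch: comparing models on English and Arabic benchmarks with the
# EleutherAI lm-evaluation-harness (v0.4-style Python API; check the API
# of your installed version). Model IDs and task names are illustrative.

from lm_eval import evaluator

for model_id in ["meta-llama/Llama-2-13b-hf",     # gated; request access first
                 "inception-mbzuai/jais-13b"]:    # Jais repo name may differ
    results = evaluator.simple_evaluate(
        model="hf",                         # Hugging Face backend
        model_args=f"pretrained={model_id}",
        tasks=["hellaswag"],                # add the Arabic tasks your harness provides
        num_fewshot=0,
        batch_size=8,
    )
    for task, metrics in results["results"].items():
        print(model_id, task, metrics)
```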

These results suggest that the only way to develop AI applications that work for everyone is to create new institutions outside the Compute North that consistently and conscientiously invest in building tools designed for the thousands of language communities across the world.

Environments of innovation

One way to design new institutions is to study history and understand how today’s centers of gravity in AI research emerged decades ago. Before Silicon Valley earned its reputation as the center of global technological innovation, it was called Santa Clara Valley and was known for its prune farms. The main catalyst for its transformation was Stanford University, which had built a reputation as one of the best places in the world to study electrical engineering. Over the years, through a combination of government grants and focused research investment, the university birthed countless inventions that advanced computing and created a culture of entrepreneurship. The results speak for themselves: Stanford alumni have founded companies such as Alphabet, NVIDIA, Netflix, and PayPal, to name a few.

Today, like MBZUAI’s predecessor in Santa Clara Valley, we have an opportunity to build a new technology hub centered around a university.

And that’s why I chose to join MBZUAI, the world’s first research university focused entirely on AI. From MBZUAI’s position at the geographical crossroads of East and West, our goal is to attract the brightest minds from around the world and equip them with the tools they need to push the boundaries of AI research and development.

A community for inclusive AI

MBZUAI’s student body comes from more than 50 different countries around the globe. It has attracted top researchers such as Monojit Choudhury from Microsoft, Elizabeth Churchill from Google, Ted Briscoe from the University of Cambridge, Sami Haddadin from the Technical University of Munich, and Yoshihiko Nakamura from the University of Tokyo, just to name a few.

These scientists may be from different places, but they’ve found a common purpose in MBZUAI’s interdisciplinary nature, its relentless focus on making AI a force for global progress, and its emphasis on collaboration across disciplines such as robotics, NLP, machine learning, and computer vision.

In addition to traditional AI disciplines, MBZUAI has built departments in sibling areas that can both contribute to and benefit from AI, including human-computer interaction, statistics and data science, and computational biology.

Abu Dhabi’s commitment to MBZUAI is part of a broader vision for AI that extends beyond academia. MBZUAI’s scientists have collaborated with G42, an Abu Dhabi-based tech company, on Jais, an Arabic-centric LLM that is the highest-performing open-weight Arabic LLM; and also NANDA, an advanced Hindi LLM. MBZUAI’s Institute of Foundational Models has created LLM360, an initiative designed to level the playing field of large model research and development by publishing fully open source models and datasets that are competitive with closed source or open weights models available from tech companies in North America or China.

MBZUAI is also developing language models that specialize in Turkic languages, which have traditionally been underrepresented in NLP, yet are spoken by millions of people.

Another recent project has brought together native speakers of 26 languages from 28 different countries to compile a benchmark dataset that evaluates the performance of vision language models and their ability to understand cultural nuances in images.

These kinds of efforts to expand the capabilities of AI to broader communities are necessary if we want to maintain the world’s cultural diversity and provide everyone with AI tools that are useful to them. At MBZUAI, we have created a unique mix of students and faculty to drive globally inclusive AI innovation for the future. By building a broad community of scientists, entrepreneurs, and thinkers, the university is increasingly establishing itself as a driving force in AI innovation that extends far beyond Abu Dhabi, with the goal of developing technologies that are inclusive of the world’s diverse languages and cultures.

This content was produced by the Mohamed bin Zayed University of Artificial Intelligence. It was not written by MIT Technology Review’s editorial staff.

NYU Researchers Develop New Real-Time Deepfake Detection Method

This sponsored article is brought to you by NYU Tandon School of Engineering.

Deepfakes, hyper-realistic videos and audio created using artificial intelligence, present a growing threat in today’s digital world. By manipulating or fabricating content to make it appear authentic, deepfakes can be used to deceive viewers, spread disinformation, and tarnish reputations. Their misuse extends to political propaganda, social manipulation, identity theft, and cybercrime.

As deepfake technology becomes more advanced and widely accessible, the risk of societal harm escalates. Studying deepfakes is crucial to developing detection methods, raising awareness, and establishing legal frameworks to mitigate the damage they can cause in personal, professional, and global spheres. Understanding the risks associated with deepfakes and their potential impact will be necessary for preserving trust in media and digital communication.

That is where Chinmay Hegde, an Associate Professor of Computer Science and Engineering and Electrical and Computer Engineering at NYU Tandon, comes in.

Chinmay Hegde, an Associate Professor of Computer Science and Engineering and Electrical and Computer Engineering at NYU Tandon, is developing challenge-response systems for detecting audio and video deepfakes. Image: NYU Tandon

“Broadly, I’m interested in AI safety in all of its forms. And when a technology like AI develops so rapidly, and gets good so quickly, it’s an area ripe for exploitation by people who would do harm,” Hegde said.

A native of India, Hegde has lived in places around the world, including Houston, Texas, where he spent several years as a student at Rice University; Cambridge, Massachusetts, where he did post-doctoral work in MIT’s Theory of Computation (TOC) group; and Ames, Iowa, where he held a professorship in the Electrical and Computer Engineering Department at Iowa State University.

Hegde, whose area of expertise is in data processing and machine learning, focuses his research on developing fast, robust, and certifiable algorithms for diverse data processing problems encountered in applications spanning imaging and computer vision, transportation, and materials design. At Tandon, he worked with Professor of Computer Science and Engineering Nasir Memon, who sparked his interest in deepfakes.

“Even just six years ago, generative AI technology was very rudimentary. One time, one of my students came in and showed off how the model was able to make a white circle on a dark background, and we were all really impressed by that at the time. Now you have high definition fakes of Taylor Swift, Barack Obama, the Pope — it’s stunning how far this technology has come. My view is that it may well continue to improve from here,” he said.

Hegde helped lead a research team from NYU Tandon School of Engineering that developed a new approach to combat the growing threat of real-time deepfakes (RTDFs) – sophisticated artificial-intelligence-generated fake audio and video that can convincingly mimic actual people in real-time video and voice calls.

High-profile incidents of deepfake fraud are already occurring, including a recent $25 million scam using fake video, and the need for effective countermeasures is clear.

In two separate papers, research teams show how “challenge-response” techniques can exploit the inherent limitations of current RTDF generation pipelines, causing degradations in the quality of the impersonations that reveal their deception.

In a paper titled “GOTCHA: Real-Time Video Deepfake Detection via Challenge-Response,” the researchers developed a set of eight visual challenges designed to signal to users when they are not engaging with a real person.

“Most people are familiar with CAPTCHA, the online challenge-response that verifies they’re an actual human being. Our approach mirrors that technology, essentially asking questions or making requests that RTDF cannot respond to appropriately,” said Hegde, who led the research on both papers.

Challenge frames from original and deepfake videos. Each row aligns outputs against the same challenge, while each column aligns the same deepfake method. The green bars represent fidelity scores, with taller bars suggesting higher fidelity; missing bars mean the specific deepfake failed that specific challenge. Image: NYU Tandon

The video research team created a dataset of 56,247 videos from 47 participants, evaluating challenges such as head movements and deliberately obscuring or covering parts of the face. Human evaluators achieved about 89 percent Area Under the Curve (AUC) score in detecting deepfakes (over 80 percent is considered very good), while machine learning models reached about 73 percent.

“Challenges like quickly moving a hand in front of your face, making dramatic facial expressions, or suddenly changing the lighting are simple for real humans to do, but very difficult for current deepfake systems to replicate convincingly when asked to do so in real-time,” said Hegde.
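In outline, a check like this can be framed as a simple randomized protocol. The sketch below is illustrative only, with a hypothetical `score_fidelity` helper standing in for a learned fidelity model; it is not the GOTCHA implementation itself.

```python
# Illustrative challenge-response loop for verifying a live video caller.
# score_fidelity stands in for a learned model that rates how naturally
# the recorded clip executed the requested challenge (0.0 to 1.0).

import random

CHALLENGES = [
    "turn_head_left", "turn_head_right", "cover_mouth_with_hand",
    "wave_hand_in_front_of_face", "smile_broadly", "change_lighting",
]

def score_fidelity(video_clip, challenge: str) -> float:
    raise NotImplementedError("hypothetical fidelity model")

def verify_caller(capture_clip, rounds: int = 3, threshold: float = 0.7) -> bool:
    # Randomized challenges keep an attacker from pre-rendering responses.
    for challenge in random.sample(CHALLENGES, rounds):
        clip = capture_clip(challenge)   # prompt the caller, record the response
        if score_fidelity(clip, challenge) < threshold:
            return False                 # degraded response: likely a deepfake
    return True
```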

Audio Challenges for Deepfake Detection

In another paper called “AI-assisted Tagging of Deepfake Audio Calls using Challenge-Response,” researchers created a taxonomy of 22 audio challenges across various categories. Some of the most effective included whispering, speaking with a “cupped” hand over the mouth, talking in a high pitch, pronouncing foreign words, and speaking over background music or speech.

“Even state-of-the-art voice cloning systems struggle to maintain quality when asked to perform these unusual vocal tasks on the fly,” said Hegde. “For instance, whispering or speaking in an unusually high pitch can significantly degrade the quality of audio deepfakes.”

The audio study involved 100 participants and over 1.6 million deepfake audio samples. It employed three detection scenarios: humans alone, AI alone, and a human-AI collaborative approach. Human evaluators achieved about 72 percent accuracy in detecting fakes, while AI alone performed better with 85 percent accuracy.

The collaborative approach, where humans made initial judgments and could revise their decisions after seeing AI predictions, achieved about 83 percent accuracy. This collaborative system also allowed AI to make final calls in cases where humans were uncertain.
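One way to picture that arbitration logic is the short sketch below; the labels and the 0.5 threshold are assumptions for illustration, not the study’s published pipeline.

```python
# Sketch of the collaborative decision rule described above: the human
# judges first, may revise after seeing the AI's score, and defers to
# the AI only when still uncertain. The 0.5 threshold is illustrative.

def collaborative_verdict(human_judgment: str, ai_fake_prob: float,
                          revised_judgment: str = "") -> str:
    # Judgments are "real", "fake", or "unsure".
    final = revised_judgment or human_judgment
    if final == "unsure":
        # Human abstains, so the AI makes the final call.
        return "fake" if ai_fake_prob >= 0.5 else "real"
    return final

# Example: an unsure human defers to a confident AI.
print(collaborative_verdict("unsure", ai_fake_prob=0.91))  # -> fake
```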

The researchers emphasize that their techniques are designed to be practical for real-world use, with most challenges taking only seconds to complete. A typical video challenge might involve a quick hand gesture or facial expression, while an audio challenge could be as simple as whispering a short sentence.

“The key is that these tasks are easy and quick for real people but hard for AI to fake in real-time,” Hegde said. “We can also randomize the challenges and combine multiple tasks for extra security.”

As deepfake technology continues to advance, the researchers plan to refine their challenge sets and explore ways to make detection even more robust. They’re particularly interested in developing “compound” challenges that combine multiple tasks simultaneously.

“Our goal is to give people reliable tools to verify who they’re really talking to online, without disrupting normal conversations,” said Hegde. “As AI gets better at creating fakes, we need to get better at detecting them. These challenge-response systems are a promising step in that direction.”

Investing in AI to build next-generation infrastructure

The demand for new and improved infrastructure across the world is not being met. The Asian Development Bank has estimated that in Asia alone, roughly $1.7 trillion needs to be invested annually through to 2030 just to sustain economic growth and offset the effects of climate change. Globally, that figure has been put at $15 trillion.

In the US, for example, it is no secret that the country’s highways, railways, and bridges are in need of updating. But as in many other sectors, significant shortages of skilled workers and resources delay all-important repairs and maintenance and harm efficiency.

This infrastructure gap – the difference between the investment required and the construction actually delivered – is vast. And while governments and companies everywhere are feeling the strain of constructing an energy-efficient and sustainable built environment, it’s proving more than humans can do alone. To redress this imbalance, many organizations are turning to various forms of AI, including large language models (LLMs) and machine learning (ML). Collectively, these technologies cannot yet fix every infrastructure problem, but they are already helping to reduce costs and risks and to increase efficiency.

Overcoming resource constraints

A shortage of skilled engineering and construction labor is a major problem. In the US, it is estimated that there will be a 33% shortfall in the supply of new talent by 2031, with unfilled positions in software, industrial, civil, and electrical engineering. Germany reported a shortage of 320,000 science, technology, engineering, and mathematics (STEM) specialists in 2022, and another engineering powerhouse, Japan, has forecast a deficit of more than 700,000 engineers by 2030. Considering the duration of most engineering projects (repairing a broken gas pipeline, for example, can take decades), the demand for qualified engineers will only continue to outstrip supply unless something is done.

Immigration and visa restrictions for international engineering students, and a lack of retention in formative STEM jobs, exert additional constraints. Then there is the issue of duplicated, repetitive tasks, which AI can handle with ease.

Julien Moutte, CTO of Bentley Systems, explains: “There’s a massive amount of work that engineers have to do that is tedious and repetitive. Between 30% to 50% of their time is spent just compressing 3D models into 2D PDF formats. If that work can be done by AI-powered tools, they can recover half their working time which could then be invested in performing higher value tasks.”

With guidance, AI can automate the production of the same drawings hundreds of times. Training engineers to ask the right questions and use AI optimally will ease the burden and stress of repetition.

However, this is not without challenges. Users of ChatGPT, or other LLMs, know the pitfalls of AI hallucinations, where the model can logically predict a sequence of words but without contextual understanding of what the words mean. This can lead to nonsensical outputs, but in engineering, hallucinations can sometimes be altogether more risky. “If a recommendation was made by AI, it needs to be validated,” says Moutte. “Is that recommendation safe? Does it respect the laws of physics? And it’s a waste of time for the engineers to have to review all these things.”

But this can be offset by having existing company tools and products run simulations and validate the designs using established engineering rules and design codes, which again relieves engineers of the burden of doing the validation themselves.

Improving resource efficiency

An estimated 30% of building materials, such as steel and concrete, are wasted on a typical construction site in the United States and United Kingdom, with the majority ending up in landfills, although countries such as Germany and the Netherlands have recently implemented recycling measures. This, along with the rising cost of raw materials, is putting pressure on companies to devise solutions that improve construction efficiency and sustainability.

AI can provide solutions to both of these issues during the design and construction phases. Digital twins can help workers spot deviations in product quality and provide the insights needed to minimize waste and energy output and, crucially, save money.

Machine learning models use real-time data from field statistics and process variables to flag off-spec materials, product deviations, and excess energy usage from sources such as machinery and transportation for construction site workers. Engineers can then anticipate the gaps and streamline the processes, making large-scale improvements for each project that can be replicated in the future.
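As a minimal sketch of that kind of flagging, assuming scikit-learn and invented process variables, a model trained on in-spec batches can score new ones:

```python
# Sketch: flagging off-spec batches from process variables with an
# unsupervised anomaly detector. Features and numbers are invented.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: cure temperature (°C), water/cement ratio, energy use (kWh/batch)
in_spec = rng.normal([60.0, 0.45, 120.0], [2.0, 0.02, 8.0], size=(500, 3))

model = IsolationForest(contamination=0.02, random_state=0).fit(in_spec)

new_batches = np.array([
    [61.0, 0.46, 118.0],   # typical batch
    [48.0, 0.60, 190.0],   # off-spec: likely wasted material and energy
])
print(model.predict(new_batches))   # +1 = in spec, -1 = flag for review, e.g. [ 1 -1]
```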

“Being able to anticipate and reduce that waste with that visual awareness, with the application of AI to make sure that you are optimizing those processes and those designs and the resources that you need to construct that infrastructure is massive,” says Moutte.

He continues, “The big game changer is going to be around sustainability because we need to create infrastructure with more sustainable and efficient designs, and there’s a lot of room for improvement.” And an important part of this will be how AI can help create new materials and models to reduce waste.

Human and AI partnership

AI might never be entirely error-free, but for the time being, human intervention can catch mistakes. Although there may be some concern in the construction sector that AI will replace humans, there are elements to any construction project that only people can do.

AI lacks the critical thinking and problem-solving that humans excel at, so additional training for engineers to supervise and maintain the automated systems is key to making the two work together optimally. Skilled workers bring creativity and intuition, as well as customer service expertise, while AI is not yet capable of such novel solutions.

With the engineers implementing appropriate guardrails and frameworks, AI can contribute the bulk of automation and repetition to projects, thereby creating a symbiotic and optimal relationship between humans and machines.

“Engineers have been designing impressive buildings for decades already, where they are not doing all the design manually. You need to make sure that those structures are validated first by engineering principles, physical rules, local codes, and the rest. So we have all the tools to be able to validate those designs,” explains Moutte.

As AI advances alongside human care and control, it can help futureproof the construction process where every step is bolstered by the strengths of both sides. By addressing the concerns of the construction industry – costs, sustainability, waste and task repetition – and upskilling engineers to manage AI to address these at the design and implementation stage, the construction sector looks set to be less riddled with potholes.

“We’ve already seen how AI can be used to create new materials and reduce waste,” explains Moutte. “As we move to 2050, I believe engineers will need those AI capabilities to create the best possible designs and I’m looking forward to releasing some of those AI-enabled features in our products.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Transforming software with generative AI

Generative AI’s promises for the software development lifecycle (SDLC)—code that writes itself, fully automated test generation, and developers who spend more time innovating than debugging—are as alluring as they are ambitious. Some bullish industry forecasts project a 30% productivity boost from AI developer tools, which, if realized, could inject more than $1.5 trillion into the global GDP.

But while there’s little doubt that software development is undergoing a profound transformation, separating the hype and speculation from the realities of implementation and ROI is no simple task. As with previous technological revolutions, the dividends won’t be instant. “There’s an equivalency between what’s going on with AI and when digital transformation first happened,” observes Carolina Dolan Chandler, chief digital officer at Globant. “AI is an integral shift. It’s going to affect every single job role in every single way. But it’s going to be a long-term process.”

Where exactly are we on this transformative journey? How are enterprises navigating this new terrain—and what’s still ahead? To investigate how generative AI is impacting the SDLC, MIT Technology Review Insights surveyed more than 300 business leaders about how they’re using the technology in their software and product lifecycles.

The findings reveal that generative AI has rich potential to revolutionize software development, but that many enterprises are still in the early stages of realizing its full impact. While adoption is widespread and accelerating, there are significant untapped opportunities. This report explores the projected course of these advancements, as well as how emerging innovations, including agentic AI, might bring about some of the technology’s loftier promises.

Key findings include the following:

Substantial gains from generative AI in the SDLC still lie ahead. Only 12% of surveyed business leaders say that the technology has “fundamentally” changed how they develop software today. Future gains, however, are widely anticipated: Thirty-eight percent of respondents believe generative AI will “substantially” change the SDLC across most organizations in one to three years, and another 31% say this will happen in four to 10 years.

Use of generative AI in the SDLC is nearly universal, but adoption is not comprehensive. A full 94% of respondents say they’re using generative AI for software development in some capacity. One-fifth (20%) describe generative AI as an “established, well-integrated part” of their SDLC, and one-third (33%) report it’s “widely used” in at least part of their SDLC. Nearly one-third (29%), however, are still “conducting small pilots” or adopting the technology on an individual-employee basis (rather than via a team-wide integration).

Generative AI is not just for code generation. Writing software may be the most obvious use case, but most respondents (82%) report using generative AI in at least two phases of the SDLC, and one-quarter (26%) say they are using it across four or more. The most common additional use cases include designing and prototyping new features, streamlining requirement development, fast-tracking testing, improving bug detection, and boosting overall code quality.

Generative AI is already meeting or exceeding expectations in the SDLC. Even with this room to grow in how fully they integrate generative AI into their software development workflows, 46% of survey respondents say generative AI is already meeting expectations, and 33% say it “exceeds” or “greatly exceeds” expectations.

AI agents represent the next frontier. Looking to the future, almost half (49%) of leaders believe advanced AI tools, such as assistants and agents, will lead to efficiency gains or cost savings. Another 20% believe such tools will lead to improved throughput or faster time to market.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Cloud transformation clears businesses for digital takeoff

In an age where customer experience can make or break a business, Cathay Pacific is embracing cloud transformation to enhance service delivery and revolutionize operations from the inside out. It’s not just technology companies that are facing pressure to deliver better customer service, do more with data, and improve agility. An almost 80-year-old airline, Cathay Pacific embarked on its digital transformation journey in 2014, spurred by a critical IT disruption that became the catalyst for revamping their technology.

By embracing the cloud, the airline has not only streamlined operations but also paved the way for innovative solutions like DevSecOps and AI integration. This shift has enabled Cathay to deliver faster, more reliable services to both passengers and staff, while maintaining a robust security framework in an increasingly digital world. 

According to Rajeev Nair, general manager of IT infrastructure and security at Cathay Pacific, becoming a digital-first airline was met with early resistance from both business and technical teams. The early stages required a lot of heavy lifting as they shifted legacy apps first from their server room to a dedicated data center and then to the cloud. From there began the modernization process that Cathay Pacific, now in the final stages of this transformation, continues to fine-tune.

The cloud migration also helped Cathay align with their ESG goals. “Two years ago, if you asked me what IT could do for sustainability, we would’ve been clueless,” says Nair. However, through cloud-first strategies and green IT practices, the airline has made notable strides in reducing its carbon footprint. Currently, the business is in the process of moving to a smaller data center, reducing physical infrastructure and its carbon emissions significantly by 2025.

The broader benefits of this cloud transformation for Cathay Pacific go beyond sustainability. Agility, time-to-market, and operational efficiency have improved drastically. “If you ask many of the enterprises, they would probably say that shifting to the cloud is all about cost-saving,” says Nair. “But for me, those are secondary aspects and the key is about how to enable the business to be more agile and nimble so that the business capability could be delivered much faster by IT and the technology team.”

By 2025, Cathay Pacific aims to have 100% of their business applications running on the cloud, significantly enhancing their agility, customer service, and cost efficiency, says Nair.

As Cathay Pacific continues its digital evolution, Nair remains focused on future-proofing the airline through emerging technologies. Looking ahead, he is particularly excited about the potential of AI, generative AI, and virtual reality to further enhance both customer experience and internal operations. From more immersive VR-based training for cabin crew to enabling passengers to preview in-flight products before boarding, these innovations are set to redefine how the airline engages with its customers and staff. 

“We have been exploring that for quite some time, but we believe that it will continue to be a mainstream technology that can change the way we serve the customer,” says Nair.

This episode of Business Lab is produced in association with Infosys Cobalt.

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Our topic today is cloud transformation to meet business goals and customer needs. It’s not just tech companies that have to stay one step ahead. Airlines too are under pressure to deliver better customer service, do more with data, and improve agility. 

Two words for you: going further. 

My guest is Rajeev Nair, who is the general manager of IT infrastructure and security at Cathay Pacific. This podcast is produced in association with Infosys Cobalt. Welcome, Rajeev. 

Rajeev Nair: Thank you. Thank you, Megan. Thank you for having me. 

Megan: Thank you ever so much for joining us. Now to get some context for our conversation today, could you first describe how Cathay Pacific’s digital transformation journey began, and explain, I guess, what stage of this transformation this almost 80-year-old airline is currently in, too? 

Rajeev: Sure, definitely Megan. So for Cathay, we started this transformation journey probably a decade back, way back in 2014. It all started with facing some major service disruption within Cathay IT where it had a massive impact on the business operation. That prompted us to trigger and initiate this transformation journey. So the first thing is we started looking at many of our legacy applications. Back in those days we still had mainframe systems that provided so many of our critical services. We started looking at migrating those legacy apps first, moving them outside of that legacy software and moving them into a proper data center. Back in those days, our data center used to be our corporate headquarters. We didn’t have a dedicated data center and it used to be in a server room. So those were the initial stages of our transformation journey, just a basic building block. So we started moving into a proper data center so that resilience and availability could be improved. 

And as a second phase, we started looking at the cloud. Those days, cloud was just about to kick off in this part of the world. We started looking at migrating to the cloud and it has been a huge challenge or resistance even from the business as well as from the technology team. Once we started moving, shifting apps to the cloud, we had multiple transformation programs to do that modernization activities. Once that is done, then the third phase of the journey is more about your network. Once your applications are moved to the cloud, your network design needs to be completely changed. Then we started looking at how we could modernize our network because Cathay operates in about 180 regions across the world. So our network is very crucial for us. We started looking at redesigning our network. 

And then, it comes to your security aspects. Things moving to the cloud, your network design is getting changed, your cybersecurity needs heavy lifting to accommodate the modern world. We started focusing on cybersecurity initiatives where our security posture has been improved a lot over the last few years. And with those basic building blocks done on the hardware and on the technology side, then comes your IT operations. Because one is your hardware and software piece, but how do you sustain your processes to ensure that it can support those changing technology landscapes? We started investing a lot around the IT operations side, but things like ITIL processes have been revisited. We started adopting many of the DevOps and the DevSecOps practices. So a lot of emphasis around processes and practices to help the team move forward, right? 

And those operations initiatives are in place. As we stand today, we are at the final stage of our cloud journey where we are looking at how we can optimize it better. So we shifted things to the cloud and that has been a heavy lifting that has been done in the early phases. Now we are focusing around how we can rewrite or refactor your application so that it can better leverage your cloud technologies where we could optimize the performance, thereby optimizing your usage and the cloud resources wherein you could save on the cost as well as on the sustainability aspect. That is where we stand. By 2025, we are looking at moving 100% of our business applications to the cloud and also reducing our physical footprint in our data centers as well.

Megan: Fantastic. And you mentioned sustainability there. I wonder how does the focus on environmental, social, and governance goals or ESG tie into your wider technology strategy? 

Rajeev: Sure. And to be very honest, Megan, if you asked me this question two years back, we would’ve been clueless on what IT could do from a sustainability aspect. But over the last two years, there has been a lot of focus around ESG components within the technology space where we have done a lot of initiatives since last year to improve and be efficient on the sustainability front. So a couple of key areas that we have done. One is definitely the cloud-first strategy where adopting the cloud-first policy reduces your carbon footprint and it also helps us in migrating away from our data center. So as we speak, we are doing a major project to further reduce our data center size by relocating to a much smaller data center, which will be completed by the end of next year. That will definitely help us to reduce our footprint. 

The second is around adopting the various green IT practices, things like energy efficient devices, be it your PCs or the laptop or virtualizations, and e-based management policies and management aspects. Some of the things are very basic and fundamental in nature. Stuff like we moved away from a dual monitor to a single monitor wherein we could reduce your energy consumption by half, or changing some of your software policies like screen timeouts and putting a monitor in standby. Those kinds of basic things really helped us to optimize and manage. And the last one is around FinOps. So FinOps is a process in the practice that is being heavily adopted in the cloud organization, but it is just not about optimizing your cost, because by adopting the FinOps practices and tying in with the GreenOps processes, we are able to focus a lot around reducing our CO2 footprint and optimizing sustainability. Those are some of the practices that we have been doing with Cathay.

Megan: Yeah, fantastic benefits from relatively small changes there. Other than ESG, what are the other benefits for an enterprise like Cathay Pacific in terms of shifting from those legacy systems to the cloud that you found?

Rajeev: For me, the key is about agility and time-to-market capability. If you ask many of the enterprises, they would probably say that shifting to the cloud is all about cost-saving. But for me, those are secondary aspects. The key is about how to enable the business to be more agile and nimble so that the business capability can be delivered much faster by IT and the technology team. So as an example, gone are the days when we take about a few months before we provision hardware and have the platform and the applications ready. Now the platforms are being delivered to the developers within an hour’s time so that the developers can quickly build their development environment and be ready for development and testing activities. Right? So agility is a key and the number one factor. 

The second is by shifting to the cloud, you’re also leveraging many of the latest technologies that the cloud comes up with and the provider has to offer. Things like capacity and the ability to scale up and down your resources and services according to your business needs and fluctuations are a huge help from a technology aspect. That way you can deliver customer-centered solutions faster and more efficiently than many of our airline customers and competitors.

And the last one is, of course, your cost saving aspect and the operational efficiency. By moving away from the legacy systems, we can reduce a lot of capex [capital expenditure]. Like, say for example, I don’t need to spend money on investing in hardware and spend resources to manage those hardware and data center operations, especially in Hong Kong where human resources are pretty expensive and scarce to find. It is very important that I rely on these sorts of technologies to manage those optimally. Those are some of the key aspects that we see from a cloud adoption perspective. 

Megan: Fantastic. And it sounds like it’s been a several year process so far. So after what sounds like pretty heavy investment when it comes to moving legacy hardware on-prem systems to the cloud. What’s your approach now to adapting your IT operations off the back of that? 

Rajeev: Exactly. That is, sort of, just based early in my transformation journey, but yeah, absolutely. By moving to the cloud, it is just not about the hardware, but it’s also about how your operations and your processes align with this changing technology and new capabilities. And, for example, by adopting more agile and scalable approach to managing IT infrastructures and applications as well. Also leveraging the data and insights that the cloud enables. To achieve this, the fundamental aspect of this is how you can revisit and fine tune your IT service management processes, and that is where your core of IT operations have been built in the past. And to manage that properly we recently, I think, over the last three years we were looking at implementing a new IT service management solution, which is built on a product called ServiceNow. So they are built on the core ITIL processes framework to help us manage the service management, the operations management, and asset management. 

Those are some of the capabilities which we rolled out with the help of our partners like Infosys so that it could provide a framework to fine tune and optimize IT processes. And we also adopted things like DevOps and DevSecOps because what we have also noticed is the processes like ITIL, which was very heavy over the last few years around support activities is also shifting. So we wanted to adopt some of these development practices into the support and operations functions to be more agile by shifting left some of these capabilities. And in this journey, Infosys has been our key partner, not only on the cloud transformation side, but also on implementation of ServiceNow, which is our key service management tool where they provided us end-to-end support starting from the planning phase or the initial conceptual phase and also into the design and development and also to the deployment and maintenance. We haven’t completed this journey and it’s still a project that is currently ongoing, and by 2025 we should be able to complete this successfully across the enterprise. 

Megan: Fascinating. It’s an awful lot of change going on. I mean, there must be an internal shift, therefore, that comes with cloud transformation too, I imagine. I wonder, what’s your approach been to upskilling your team to help it excel in this new way of working?

Rajeev: Yeah, absolutely. And that is always the hardest part. You can change your technology and processes, but changing your people, that’s always the toughest and the hardest bit. And essentially this is all about change management, and that has been one of our struggles in our early part of the cloud transformation journey. What we did is we invested a lot in terms of uplifting our traditional infrastructure team. All the traditional technology teams have to go through that learning curve in adopting cloud technology early in our project. And we also provided a lot of training programs, including some of our cloud partners were able to upskill and train these resources.

But the key differences that we are seeing is even after providing all those training and upskilling programs, we could see that there was a lot of resistance and a lot of doubts in people’s mind about how cloud is going to help the organization. And the best part is what we did is we included these team members into our project so that they get the hands-on experience. And once they start seeing the benefits around these technologies, there was no looking back. And the team was able to completely embrace the cloud technologies to the point that we still have a traditional technology team who’s supporting the remaining hardware and the servers of the world, but they’re also very keen to shift across the line and adopt and embrace the cloud technology. But it’s been quite a journey for us. 

Megan: That’s great to hear that you’ve managed to bring them along with you. And I suppose it’d be remiss of me if we’re talking about embracing new technologies not to talk about AI, although still in its early stages in most industries. I wonder how is Cathay Pacific approaching AI adoption as well? 

Rajeev: Sure. I think these days none of these conversations can be complete without talking about AI and gen AI. We started this early exploratory phase early into the game, especially in this part of the world. But for us, the key is approaching this based on the customer’s pain points and business needs and then we work backward to identify what type of AI is best suitable or relevant to us. In Cathay, currently, we focus on three main types of AI. One is of course conversational AI. Essentially, it is a form of an internal and external chatbot. Our chatbot, we call it Vera, serves customers directly and can handle about 50% of the inquiries successfully. And just about two weeks back, we upgraded the chatbot with a new LLM model, which is able to be more efficient and much more responsive in terms of the human work. So that’s one part of the AI that we heavily invested on.

Second is RPA, or robotic process automation, especially what you’re seeing is during the pandemic and post-Covid era, there is limited resources available, especially in Hong Kong and across our supply chain. So RPA or the robotic processes helps to automate mundane repetitive tasks, which doesn’t only fill the resource gap, but it also directly enhances the employee experience. And so far in Cathay, we have about a hundred bots in production serving various business units, serving approximately 30,000 hours every year of human activity. So that’s the second part. 

The third one is around ML and it’s the gen AI. So like our digital team or the data science team has developed about 70-plus ML models in Cathay that turned the organization data into insights or actionable items. These models help us to make a better decision. For example, what meals to be loaded into the aircraft and specific routes, in terms of what quantity and what kind of product offers we promote to customers, and including the fare loading and the pricing of our passenger as well as a cargo bay space. There is a lot of exploration that is being done in this space as well. And a couple of examples I could relate is if you ever happen to come to Hong Kong, next time at the airport, you could hear the public announcement system and that is also AI-powered recently. In the past, our staff used to manually make those announcements and now it has been moved away and has been moved into AI-powered voice technology so that we could be consistent in our announcement. 

Megan: Oh, fantastic. I’ll have to listen for it next time I’m at Hong Kong airport. And you’ve mentioned this topic a couple of times in the conversation. Look, when we’re talking about cloud modernization, cybersecurity can be a roadblock to agility, I guess, if it’s not managed effectively. So could you also tell us in a little more detail how Cathay Pacific has integrated security into its digital transformation journey, particularly with the adoption of development security operations practices that you’ve mentioned? 

Rajeev: Yeah, this is an interesting one. I look after cybersecurity as well as infrastructure services, and with both of these critical functions in my hands, I need to be mindful of both aspects, right? It has changed over time, and I fully understand why cybersecurity practices need to be rigid: there is a lot of compliance, it is a highly regulated function, and if something goes wrong, as CISO I am held accountable for those faults. So I can understand why the team is so rigid in its practices. And I also understand that from a business perspective it could be perceived as a roadblock to agility.

One of the key things we have done in Cathay is this: we have been following DevOps for quite a number of years, and in the last two years we started implementing DevSecOps into our STLC [software testing life cycle]. What it essentially means is that rather than the core cybersecurity team being responsible for most of the security testing, we want to shift some of these capabilities left, to the developers, so that the people who write the code are held accountable for the testing and the quality of the output. And they’re also enabled in terms of the cybersecurity process, right?

Of course, when we started this journey, there was huge resistance from the security team itself, because they didn’t really trust the developers to do the testing, or the testing outputs. But over time, with the introduction of various tools and automation, this has matured to the point where it enables the upstream teams to take care of all aspects of security, like threat modeling, code scanning, and vulnerability testing. At the end, the security team still validates and acts as a sort of gatekeeper, but through very lightweight, built-in processes. This way we can ensure that our cloud applications are secure by design and by default, and we can deliver them faster and more reliably to our customers.

In the past, security was always perceived as the accountability of the cybersecurity team. By enabling developers on the security aspects, you now have better ownership in the organization when it comes to cybersecurity, and it is building a better cybersecurity culture within the organization. That, to me, is key, because from a security aspect we always say that people are your first line of defense, and often they’re also the last line of defense. I’m glad that through these processes we are able to improve that maturity in the organization.

Megan: Absolutely. And cybersecurity is obviously something that’s really important to a lot of customers nowadays as well. I wondered if you could offer some other examples of how your digital transformation has improved the customer experience in other ways?

Rajeev: Yeah, definitely. Maybe I can quote a few examples, Megan. One is around our pilots. You would’ve seen, when you travel through the airport or on the aircraft, that pilots usually carry a briefcase when they board the flight, and you’ve probably wondered what exactly they carry. Basically, it contains a bunch of papers: your weather charts, your navigation routes, the flight plans, the crew details. It’s a whole stack of paper that they have to carry on each and every flight. In Cathay, through digitization, we have automated those processes, and now they carry an iPad instead of a bunch of papers or a briefing pack. That iPad includes all the software required for the captain to operate the flight in a legal and safe manner.

Paperless cockpit operation is nothing new. Many airlines have attempted it, but I should say that Cathay has been at the forefront in truly establishing a paperless operation, and many other airlines have shown great interest in using our software. That is one aspect, from a flight crew perspective. Second, from a customer perspective, we have an app called Customer 360, a completely in-house development, which holds all of a customer’s direct transactions and surveys, and how they interact at the various checkpoints with our crew or at boarding. You have this whole data feed for a particular customer, so our agents or the cabin crew can understand the customer’s sentiment and their reaction to a service recovery action.

Say, for example, a customer calls the call center and asks for a refund or miles compensation. Based on the historical usage, we can prioritize the best action to improve the customer’s satisfaction. We have connected all these models and enabled the frontline teams so that they can use this when they engage with the customer. For example, at the airport our agents can see a lot of useful insights about customers beyond basic information like the flight itinerary or their online shopping history at the Cathay shop, so they can see the overall satisfaction level and get additional insights on recommended actions to restore or improve it. This is used by our frontline agents at the airport, our cabin crew, and the customer team, so that there is great consistency in service no matter which touchpoint the customer chooses to contact us through.

Megan: Fantastic. 

Rajeev: So these are a few examples, from a back-end as well as a frontline perspective.

Megan: Yeah, absolutely. I’m sure there are a few people listening who were wondering what pilots carry in that briefcase, so thank you so much for clearing that up. And finally, Rajeev, looking ahead, what emerging technologies are you excited to explore further to enhance digital capabilities and customer experience in the years to come?

Rajeev: Yeah, so we will continue to explore AI and gen AI capabilities, which have been in the spotlight for the last 18 months or so, be it for passengers or for staff internally. But apart from AI, one other area I believe could go a great way is AR and VR, augmented and virtual reality. We have been exploring them for quite some time, and we believe they will become mainstream technologies that can change the way we serve the customer. For example, in Cathay we already have a VR cave for our cabin crew training, and in a few months’ time we are launching a VR-based learning facility where we can provide a more immersive learning experience for the cabin crew and, later, for other employees.

Basically, before cabin crew are able to operate a flight, they go through rigorous training at Cathay City, our headquarters: how to serve our passengers, how to handle an emergency situation, those sorts of aspects. In many cases, we fly crew from various outports and countries back to Hong Kong to train and equip them. You can imagine that costs us a lot of money and effort. With VR capabilities, we are able to do that anywhere in the world without that physical presence. That’s one area where it’ll go mainstream.

The second is around other business units. Apart from the cabin crew, we are also experimenting with VR on the customer front. For example, we are launching a new business class seat product, which we call the Aria Suite, by next year, and VR technology will help customers visualize the seat details without getting on board. So even before they fly, they’re able to experience the product on the ground. At our physical shop in Hong Kong, customers can now use virtual reality technology to visualize how our designer furniture and lifestyle products fit in their sitting rooms. The list of VR capabilities goes on. And this is a great and important way to engage with our customers in particular.

Megan: Wow. Sounds like some exciting stuff on the way. Thank you ever so much, Rajeev, for talking us through that. That was Rajeev Nair, the general manager of IT infrastructure and security at Cathay Pacific, who I spoke with from an unexpectedly sunny Brighton, England.

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. 

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

Data strategies for AI leaders

Organizations are starting the heavy lifting to get real business value from generative AI. As Arnab Chakraborty, chief responsible AI officer at Accenture, puts it, “2023 was the year when clients were amazed with generative AI and the possibilities. In 2024, we are starting to see scaled implementations of responsible generative AI programs.”

Some generative AI efforts remain modest. As Neil Ward-Dutton, vice president for automation, analytics, and AI at IDC Europe, describes it, this is “a classic kind of automation: making teams or individuals more productive, getting rid of drudgery, and allowing people to deliver better results more quickly.” Most companies, though, have much greater ambitions for generative AI: they are looking to reshape how they operate and what they sell.

Great expectations for generative AI

The expectation that generative AI could fundamentally upend business models and product offerings is driven by the technology’s power to unlock vast amounts of data that were previously inaccessible. “Eighty to 90% of the world’s data is unstructured,” says Baris Gultekin, head of AI at AI data cloud company Snowflake. “But what’s exciting is that AI is opening the door for organizations to gain insights from this data that they simply couldn’t before.”

In a poll conducted by MIT Technology Review Insights, global executives were asked about the value they hoped to derive from generative AI. Many say they are prioritizing the technology’s ability to increase efficiency and productivity (72%), increase market competitiveness (55%), and drive better products and services (47%). Fewer see the technology primarily as a driver of increased revenue (30%) or reduced costs (24%), which suggests executives have loftier ambitions. Respondents’ top ambitions for generative AI also seem to work hand in hand: more than half of companies name new routes to market competitiveness among their top three goals, and the two likely paths to achieving it are increased efficiency and better products or services.

For companies rolling out generative AI, these are not necessarily distinct choices. Chakraborty sees a “thin line between efficiency and innovation” in current activity. “We are starting to notice companies applying generative AI agents for employees, and the use case is internal,” he says, but the time saved on mundane tasks allows personnel to focus on customer service or more creative activities. Gultekin agrees. “We’re seeing innovation with customers building internal generative AI products that unlock a lot of value,” he says. “They’re being built for productivity gains and efficiencies.”

Chakraborty cites marketing campaigns as an example: “The whole supply chain of creative input is getting re-imagined using the power of generative AI. That is obviously going to create new levels of efficiency, but at the same time probably create innovation in the way you bring new product ideas into the market.” Similarly, Gultekin reports that a global technology conglomerate and Snowflake customer has used AI to make “700,000 pages of research available to their team so that they can ask questions and then increase the pace of their own innovation.”

The impact of generative AI on chatbots—in Gultekin’s words, “the bread and butter of the recent AI cycle”—may be the best example. The rapid expansion of chatbot capabilities using AI sits on the border between improving an existing tool and creating a new one. It is unsurprising, then, that 44% of respondents see improved customer satisfaction as a way that generative AI will bring value.

A closer look at our survey results reflects this overlap between productivity enhancement and product or service innovation. Nearly one-third of respondents (30%) included both increased productivity and innovation in the top three types of value they hope to achieve with generative AI. The first, in many cases, will serve as the main route to the other.

But efficiency gains are not the only path to product or service innovation. Some companies, Chakraborty says, are “making big bets” on wholesale innovation with generative AI. He cites pharmaceutical companies as an example. They, he says, are asking fundamental questions about the technology’s power: “How can I use generative AI to create new treatment pathways or to reimagine my clinical trials process? Can I accelerate the drug discovery time frame from 10 years to five years to one?”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Preventing Climate Change: A Team Sport


This sponsored session was presented by MEDC at MIT Technology Review’s 2024 EmTech MIT event.

Michigan is at the forefront of the clean energy transition, setting an example in mobility and automotive innovation. Other states and organizations can learn from Michigan’s approach to public-private partnerships, actionable climate plans, and business-government alignment. Progressive climate policies are not only crucial for sustainability but also for attracting talent in today’s competitive job market.

Read more from MIT Technology Review Insights & MEDC about addressing climate change impacts


About the speaker

Hilary Doe, Chief Growth & Marketing Officer, Michigan Economic Development Corporation

As Chief Growth & Marketing Officer, Hilary Doe leads the state’s efforts to grow Michigan’s population, economy, and reputation as the best place to live, work, raise a family, and start a business. Hilary works alongside the Growing Michigan Together Council on a once-in-a-generation effort to grow Michigan’s population, boost economic growth, and make Michigan the place everyone wants to call home.

Hilary is a dynamic leader in nonprofits, technology, strategy, and public policy. She served as the national director at the Roosevelt Network, where she built and led an organization engaging thousands of young people in civic engagement and social change programming at chapters nationwide, work that ultimately earned the organization the MacArthur Award for Creative and Effective Institutions. She also served as Vice President of the Roosevelt Institute, where she oversaw strategy and expanded the Institute’s Four Freedoms Center, with the goal of empowering communities and reducing inequality alongside the greatest economists of our generation. Most recently, she served as President and Chief Strategy Officer at Nationbuilder, working to equip the world’s leaders with software to grow their movements, businesses, and organizations, while spreading democracy.

Hilary is a graduate of the University of Michigan’s Honors College and Ford School of Public Policy, a Detroit resident, and proud Michigander.

Productivity Electrified: Tech That Is Supercharging Business


This sponsored session was presented by Ford Pro at MIT Technology Review’s 2024 EmTech MIT event.

A decarbonized transportation system is a necessary prerequisite for a sustainable economy. In the transportation industry, the road to electrification and greater technology adoption can also increase business bottom lines and reduce downstream costs to taxpayers. Focusing on early adopters such as first responders, local municipalities, and small business owners, we’ll discuss common misconceptions, barriers to adoption, implementation strategies, and how these insights carry over into widespread adoption of emerging technology and electric vehicles.


About the speaker

Wanda Young, Global Chief Marketing & Experience Officer, Ford Pro

Wanda Young is a visionary brand marketer and digital transformation expert who thrives at the intersection of brand, digital, technology, and data, paired with a deep understanding of the consumer mindset. She gained her experience working for the largest brands in retail, sports and entertainment, consumer products, and electronics. She is a successful brand marketer and change agent whom organizations seek out to drive digital and data transformation: a Chief Experience Officer years before the title was invented. In her roles managing multiple notable brands, including Samsung, Disney, ESPN, Walmart, Alltel, and Acxiom, she developed knowledge of the interconnectedness of brand, digital, and data; of the importance of customer experience across all touchpoints; of the power of data and localization; and of the in-the-trenches accountability needed to drive outcomes. Now at Ford Pro, the commercial division of Ford Motor Company, she is focused on helping grow the newly launched division and brand, which offers what only Ford can offer commercial customers: an integrated lineup of vehicles and services designed to meet the needs of all businesses and keep their productivity on pace to drive growth.

Young has enjoyed a series of firsts in her career, including launching ESPN+, developing Walmart’s first social media presence and building 5,000 of its local Facebook pages (which are still live today and continue to scale), developing the first weather-triggered ad product with The Weather Company, designing an ad product with Google called Local Inventory Ads, being part of the team that took Alltel Wireless private (it later sold to Verizon Wireless), and launching the Acxiom.com website on her first Mother’s Day with her daughter on her lap. She serves on boards and is involved in a number of industry organizations, and she has received many prestigious awards. Young received a Bachelor of Arts in English with a minor in Advertising from the University of Arkansas.

Addressing climate change impacts

The reality of climate change has spurred enormous public and private investment worldwide, funding initiatives to mitigate its effects and to adapt to its impacts. That investment has spawned entire industries and countless new businesses, resulting in the creation of new green jobs and contributions to economic growth. In the United States, this includes the single largest climate-related investment in the country’s history, made in 2022 as part of the Inflation Reduction Act.

For most US businesses, however, the costs imposed by climate change and the future risks it poses will outweigh growth opportunities afforded by the green sector. In a survey of 300 senior US executives conducted by MIT Technology Review, every respondent agrees that climate change is either harming the economy today or will do so in the future. Most expect their organizations to contend with extreme weather, such as severe storms, flooding, and extreme heat, in the near term. Respondents also report their businesses are already incurring costs related to climate change.

This research examines how US businesses view their climate change risk and the steps they are taking to adapt to climate change’s impacts. The results make clear that climate considerations, such as frequency of extreme weather and access to natural resources, are now a prime factor in businesses’ site location decisions. As climate change accelerates, such considerations are certain to grow in importance.

Key findings include the following:

Businesses are weighing relocation due to climate risks. Most executives in the survey (62%) deem their physical infrastructure (some or all of it) exposed to the impacts of climate change, with 20% reporting it is “very exposed.” A full 75% of respondents report their organization has considered relocating due to climate risk, with 6% indicating they have concrete plans to relocate facilities within the next five years due to climate factors. And 24% report they have already relocated physical infrastructure to prepare for climate change impacts.

Companies must reckon with the costs of climate change. Nearly all US businesses have already suffered from the effects of climate change, judging by the survey. Weighing most heavily thus far, and likely in the future, are increases in operational costs (affecting 64%) and insurance premiums (63%), as well as disruption to operations (61%) and damage to infrastructure (55%).

Executives know climate change is here, and many are planning for it. Four-fifths (81%) of survey respondents deem climate planning and preparedness important to their business, and one-third describe it as very important. There is a seeming lag at some companies, however, in translating this perceived importance into actual planning: only 62% have developed a climate change adaptation plan, and 52% have conducted a climate risk assessment.

Climate-planning resources are a key criterion in site location. When judging a potential new business site on its climate mitigation features, 71% of executives highlight the availability of climate-planning resources as among their top criteria. Nearly two-thirds (64%) also cite the importance of a location’s access to critical natural resources.

Though climate change will affect everyone, its risks and impacts vary by region. No US region is immune to climate change: a majority of surveyed businesses in every region have experienced at least some negative climate change impacts. However, respondents believe the risks are lowest in the Midwest, with nearly half of respondents (47%) naming that region as least exposed to climate change risk.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Preparing for the unknown: A guide to future-proofing imaging IT

In an era of unprecedented technological advancement, the health-care industry stands at a crossroads. As health expenditure continues to outpace GDP growth in many countries, health-care executives grapple with crucial decisions on investment prioritization for digitization, innovation, and digital transformation. The imperative to provide high-quality, patient-centric care in an increasingly digital world has never been more pressing. At the forefront of this transformation is imaging IT—a critical component that’s evolving to meet the challenges of modern health care.

The future of imaging IT is characterized by interconnected systems, advanced analytics, robust data security, AI-driven enhancements, and agile infrastructure. Organizations that embrace these trends will be well-positioned to thrive in the changing health-care landscape. But what exactly does this future look like, and how can health-care providers prepare for it?

Networked care models: The new paradigm

The adoption of networked care models is set to revolutionize health-care delivery. These models foster collaboration among stakeholders, making patient information readily available and leading to more personalized and efficient care. As we move forward, expect to see health-care organizations increasingly investing in technologies that enable seamless data sharing and interoperability.

Imagine a scenario where a patient’s entire medical history, including imaging data from various specialists, is instantly accessible to any authorized health-care provider. This level of connectivity not only improves diagnosis and treatment but also enhances the overall patient experience.

Data integration and analytics: Unlocking insights

True data integration is becoming the norm in health care. Robust integrated image and data management solutions (IDM) are consolidating patient data from diverse sources. But the real game-changer lies in the application of advanced analytics and AI to this treasure trove of information.

By leveraging these technologies, medical professionals can extract meaningful insights from complex data sets, leading to quicker and more accurate diagnoses and treatment decisions. The potential for improving patient outcomes through data-driven decision-making is immense.

A case in point is the implementation of Syngo Carbon Image and Data Management (IDM) at Tirol Kliniken GmbH in Innsbruck, Austria. This solution consolidates all patient-centric data points in one place, including different image and photo formats, DICOM CDs, and digitalized video sources from endoscopy or microscopy. The system digitizes all documents in their raw formats, enabling the distribution of native, actionable data throughout the enterprise.

Data privacy and edge computing: Balancing innovation and security

As health care becomes increasingly data-driven, concerns about data privacy remain paramount. Enter edge computing—a solution that enables the processing of sensitive patient data locally, reducing the risk of data breaches during processing and transmission.

This approach is crucial for health-care facilities aiming to maintain patient trust while adopting advanced technologies. By keeping data processing close to the source, health-care providers can leverage cutting-edge analytics without compromising on security.
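As a minimal sketch of this pattern (hypothetical Python, not Siemens Healthineers code), consider an edge node that analyzes imaging data on-premises and transmits only pseudonymized, derived findings. The function names, fields, and values here are illustrative assumptions, not a real product API.

```python
# Sketch of the edge-computing privacy pattern: raw pixel data is processed
# locally, and only a pseudonymized reference plus derived findings leave
# the hospital network.

import hashlib
import json

def analyze_locally(pixel_data: bytes) -> dict:
    # Stand-in for an on-device AI model running inside the hospital network.
    # A real system would run a trained model here; a trivial brightness
    # statistic keeps the sketch self-contained.
    mean_intensity = sum(pixel_data) / max(len(pixel_data), 1)
    return {
        "finding": "nodule_suspected" if mean_intensity > 127 else "normal",
        "confidence": 0.87,  # illustrative value only
    }

def build_outbound_payload(patient_id: str, findings: dict) -> str:
    # Only a pseudonymized reference and the derived findings are transmitted;
    # the raw image and the real identifier never leave the edge node.
    pseudonym = hashlib.sha256(patient_id.encode()).hexdigest()[:16]
    return json.dumps({"patient_ref": pseudonym, **findings})

if __name__ == "__main__":
    raw_image = bytes(range(256)) * 16        # placeholder for DICOM pixel data
    findings = analyze_locally(raw_image)     # processing stays on-premises
    print(build_outbound_payload("MRN-12345", findings))
```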

Workflow integration and AI: Enhancing efficiency and accuracy

The integration of AI into medical imaging workflows is set to dramatically improve efficiency, accuracy, and the overall quality of patient care. AI-powered solutions are becoming increasingly common, reducing the burden of repetitive tasks and speeding up diagnosis.

From automated image analysis to predictive modeling, AI is transforming every aspect of the imaging workflow. This not only improves operational efficiency but also allows health-care professionals to focus more on patient care and complex cases that require human expertise.

A quantitative analysis at the Medical University of South Carolina demonstrates the impact of AI integration. With the support of deep learning algorithms fully embedded in the clinical workflow, cardiothoracic radiologists exhibited a reduction in chest CT interpretation times of 22.1% compared to workflows without AI support.
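One common form of this integration is triage: an AI model scores incoming studies, and the reading worklist is reordered so likely-urgent cases surface first. The short Python sketch below is a hypothetical illustration of that step; the names and scores are invented, and a real deployment would take the score from an image-analysis model rather than hard-coded values.

```python
# Sketch of AI-assisted worklist triage: studies flagged as likely urgent by
# an upstream model are read first, with ties broken by waiting time.

from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    ai_urgency: float     # 0.0-1.0 urgency score from an upstream AI model
    waiting_minutes: int  # how long the study has been in the queue

def prioritized_worklist(studies: list[Study]) -> list[Study]:
    # Highest urgency first; among equal scores, the longest-waiting study wins.
    return sorted(studies, key=lambda s: (-s.ai_urgency, -s.waiting_minutes))

worklist = [
    Study("ACC-001", ai_urgency=0.12, waiting_minutes=40),
    Study("ACC-002", ai_urgency=0.91, waiting_minutes=5),   # flagged by the model
    Study("ACC-003", ai_urgency=0.12, waiting_minutes=90),
]
for s in prioritized_worklist(worklist):
    print(s.accession, s.ai_urgency, s.waiting_minutes)
```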

Virtualization: The key to agility

To future-proof their IT infrastructure, health-care organizations are turning to virtualization. This approach allows for modularization and flexibility, making it easier to adapt to rapidly evolving technologies such as AI-driven diagnostics.

Container technology is playing a pivotal role in optimizing resource utilization and scalability. By embracing virtualization, health-care providers can ensure their IT systems remain agile and responsive to changing needs.

Standardization and compliance: Ensuring long-term compatibility

As imaging IT systems evolve, adherence to industry standards and compliance requirements remains crucial. These systems need to seamlessly interact with Electronic Health Records (EHRs), medical devices, and other critical systems.

This adherence ensures long-term compatibility and the ability to accommodate emerging technologies. It also facilitates smoother integration of new solutions into existing IT ecosystems, reducing implementation challenges and costs.

Real-world success stories

The benefits of these technologies are not theoretical—they are being realized in health-care organizations around the world. For instance, the virtualization strategy implemented at University Hospital Essen (UME), one of Germany’s largest university hospitals, has dramatically improved the hospital’s ability to manage increasing data volumes and applications. UME’s critical clinical information systems now run on modular and virtualized systems, allowing experts to design and use innovative solutions, including AI tools that automate tasks previously done manually by IT and medical staff.

Similarly, the PANCAIM project leverages edge computing for pancreatic cancer detection. This EU-funded initiative uses Siemens Healthineers’ edge computing approach to develop and validate AI algorithms. At Karolinska Institutet, Sweden, an algorithm was implemented for a real pancreatic cancer case, ensuring sensitive patient data remains within the hospital while advancing AI validation in clinical settings.

Another innovative approach is the concept of a Common Patient Data Model (CPDM). This standardized framework defines how patient data is organized, stored, and exchanged across different health-care systems and platforms, addressing interoperability challenges in the current health-care landscape.
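As a rough illustration of the idea, here is a minimal Python sketch of what a CPDM-style record could look like. The class and field names are hypothetical assumptions, since the article does not specify the actual schema; the point is simply that every participating system reads and writes one agreed-upon structure.

```python
# Sketch of a common patient data model: one shared record structure that
# different systems can serialize to and parse from.

from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class ImagingStudy:
    modality: str        # e.g., "CT", "MR", "endoscopy video"
    performed_at: str    # ISO 8601 timestamp
    source_system: str   # originating archive (PACS, endoscopy, microscopy)
    uri: str             # where the native, actionable data lives

@dataclass
class PatientRecord:
    patient_ref: str     # pseudonymized identifier
    studies: List[ImagingStudy] = field(default_factory=list)

    def to_interchange_json(self) -> str:
        # Serialize to the one agreed-upon structure that any participating
        # system can parse, which is the point of a common data model.
        return json.dumps(asdict(self), indent=2)

record = PatientRecord(patient_ref="a1b2c3")
record.studies.append(
    ImagingStudy("CT", "2024-01-15T09:30:00Z", "PACS-A",
                 "dicomweb://archive/studies/123")
)
print(record.to_interchange_json())
```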

The road ahead: Continuous innovation

As we look to the future, it’s clear that technological advancements in radiology will continue at a rapid pace. To stay competitive and provide the best patient care, health-care organizations must prioritize ongoing innovation and the adoption of new technologies.

This includes not only IT systems but also medical devices and treatment methodologies. The health-care providers who embrace this ethos of continuous improvement will be best positioned to navigate the challenges and opportunities that lie ahead.

In conclusion, the future of imaging IT is bright, promising unprecedented levels of efficiency, accuracy, and patient-centricity. By embracing networked care models, leveraging advanced analytics and AI, prioritizing data security, and maintaining agile IT infrastructure, health-care organizations can ensure they’re prepared for whatever the future may hold.

The journey towards future-proof imaging IT may seem daunting, but it’s a necessary evolution in our quest to provide the best possible health care. As we stand on the brink of this new era, one thing is clear: the future of health care is digital, data-driven, and more connected than ever before.

If you want to learn more, you can find more information from Siemens Healthineers.

Syngo Carbon consists of several products which are (medical) devices in their own right. Some products are under development and not commercially available. Future availability cannot be ensured.

The results by Siemens Healthineers customers described herein are based on results that were achieved in the customer’s unique setting. Since there is no “typical” hospital and many variables exist (e.g., hospital size, case mix, level of IT adoption), it cannot be guaranteed that other customers will achieve the same results.

This content was produced by Siemens Healthineers. It was not written by MIT Technology Review’s editorial staff.

Sydney’s Tech Super-Cluster Propels Australia’s AI Industry Forward



This is a sponsored article brought to you by BESydney.

Australia has experienced a remarkable surge in AI enterprise during the past decade. Significant AI research and commercialization concentrated in Sydney drives the sector’s development nationwide and influences AI trends globally. The city’s cutting-edge AI sector sees academia, business and government converge to foster groundbreaking advancements, positioning Australia as a key player on the international stage.

Sydney – home to half of Australia’s AI companies

Sydney has been pinpointed as one of four urban super-clusters in Australia, featuring the highest number of tech firms and the most substantial research in the country.

The Geography of Australia’s Digital Industries report, commissioned by the national science agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Tech Council of Australia, found Sydney is home to 119,636 digital professionals and 81 digital technology companies listed on the Australian Securities Exchange with a combined worth of A$52 billion.

AI is infusing all areas of this tech landscape. According to CSIRO, more than 200 active AI companies operate across Greater Sydney, representing almost half of the country’s 544 AI companies.

“Sydney is the capital of AI startups for Australia and this part of Australasia”
—Toby Walsh, UNSW Sydney

With this extensive AI commercialization and collaboration in progress across Sydney, AI startups are flourishing.

“Sydney is the capital of AI startups for Australia and this part of Australasia,” according to Professor Toby Walsh, Scientia Professor of Artificial Intelligence at the Department of Computer Science and Engineering at the University of New South Wales (UNSW Sydney).

He cites robotics, AI in medicine and fintech as three areas where Sydney leads the world in AI innovation.

“As a whole, Australia punches well above its weight in the AI sector,” Professor Walsh says. “We’re easily in the top 10, and by some metrics, we’re in the top five in the world. For a country of just 25 million people, that is quite remarkable.”

Sydney’s universities at the forefront of AI research

A key to Sydney’s success in the sector is the strength of its universities, which are producing outstanding research.

In 2021, the University of Sydney (USYD), the University of New South Wales (UNSW Sydney), and the University of Technology Sydney (UTS) collectively produced more than 1000 peer-reviewed publications in artificial intelligence, contributing significantly to the field’s development.

According to CSIRO, Australia’s research and development sector has higher rates of AI adoption than global averages, with Sydney presenting the highest AI publishing intensity among Australian universities and research institutes.

Professor Aaron Quigley, Science Director and Deputy Director of CSIRO’s Data61 and Head of School in Computer Science and Engineering at UNSW Sydney, says Sydney’s AI prowess is supported by a robust educational pipeline that supplies skilled graduates to a wide range of industries that are rapidly adopting AI technologies.

“Sydney’s AI sector is backed up by the fact that you have such a large educational environment with universities like UTS, USYD and UNSW Sydney,” he says. “They rank in the top five of AI locations in Australia.”

UNSW Sydney is a heavy hitter, with more than 300 researchers applying AI across various critical fields such as hydrogen fuel catalysis, coastal monitoring, safe mining, medical diagnostics, epidemiology and stress management.

UNSW Sydney has more than 300 researchers applying AI across various critical fields such as hydrogen fuel catalysis, coastal monitoring, safe mining, medical diagnostics, epidemiology, and stress management. Photo: UNSW

UNSW Sydney’s AI Institute also has the largest concentration of academics working in AI in the country, adds Professor Walsh.

“One of the main reasons the AI Institute exists at UNSW Sydney is to be a front door to industry and government, to help translate the technology out of the laboratory and into practice,” he says.

Likewise, the Sydney Artificial Intelligence Centre at the University of Sydney, the Australian Artificial Intelligence Institute at UTS, and Macquarie University’s Centre for Applied Artificial Intelligence are producing world-leading research in collaboration with industry.

Alongside the universities, the Australian Government’s National AI Centre in Sydney aims to support and accelerate Australia’s AI industry.

Synergies in Sydney: where tech titans converge

Sydney’s vortex of tech talent has meant exciting connections and collaborations are happening at lightning speed, allowing simultaneous growth of several high-value industries.

The intersection between quantum computing and AI came into focus with the April 2024 announcement of a new Australian Centre for Quantum Growth at the University of Sydney. This centre aims to build strategic and lasting relationships that drive innovation and increase the nation’s competitiveness within the field. Funded under the Australian Government’s National Quantum Strategy, it will promote the industry and enhance Australia’s global standing.

“There’s nowhere else in the world that you’re going to get a quantum company, a games company, and a cybersecurity company in such close proximity across this super-cluster arc located in Sydney”
—Aaron Quigley, UNSW Sydney

“There’s a huge amount of experience in the quantum space in Sydney,” says Professor Quigley. “Then you have a large number of companies and researchers working in cybersecurity, so you have the cybersecurity-AI nexus as well. Then you’ve got a large number of media companies and gaming companies in Sydney, so you’ve got the interconnection between gaming and creative technologies and AI.”

“So it’s a confluence of different industry spaces, and if you come here, you can tap into these different specialisms,” he adds. “There’s nowhere else in the world that you’re going to get a quantum company, a games company, and a cybersecurity company in such close proximity across this super-cluster arc located in Sydney.”

A global hub for AI innovation and collaboration

In addition to its research and industry achievements in the AI sector, Sydney is also a leading destination for AI conferences and events. The Women in AI Asia Pacific Conference is held in Sydney each year, adding much-needed diversity to the mix.

Additionally, the prestigious International Joint Conference on Artificial Intelligence was held in Sydney in 1991.

Overall, Sydney’s integrated approach to AI development, characterized by strong academic output, supportive government policies, and vibrant commercial activity, firmly establishes it as a leader in the global AI landscape.

To discover more about how Sydney is shaping the future of AI download the latest eBook on Sydney’s Science & Engineering industry at besydney.com.au

Your Gateway to a Vibrant Career in the Expanding Semiconductor Industry



This sponsored article is brought to you by Purdue University.

The CHIPS America Act was a response to a worsening shortfall in engineers equipped to meet the growing demand for advanced electronic devices. That need persists. In its 2023 policy report, Chipping Away: Assessing and Addressing the Labor Market Gap Facing the U.S. Semiconductor Industry, the Semiconductor Industry Association forecast a demand for 69,000 microelectronic and semiconductor engineers between 2023 and 2030—including 28,900 new positions created by industry expansion and 40,100 openings to replace engineers who retire or leave the field.

This number does not include another 34,500 computer scientists (13,200 new jobs, 21,300 replacements), nor does it count jobs in other industries that require advanced or custom-designed semiconductors for controls, automation, communication, product design, and the emerging systems-of-systems technology ecosystem.

Purdue University is taking charge, leading semiconductor technology and workforce development in the U.S. As early as Spring 2022, Purdue University became the first top engineering school to offer an online Master’s Degree in Microelectronics and Semiconductors.

U.S. News & World Report has ranked the university’s graduate engineering program among America’s 10 best every year since 2012 (and among the top 4 since 2022)

“The degree was developed as part of Purdue’s overall semiconductor degrees program,” says Purdue Prof. Vijay Raghunathan, one of the architects of the semiconductor program. “It was what I would describe as the nation’s most ambitious semiconductor workforce development effort.”

Prof. Vijay Raghunathan, one of the architects of the online Master’s Degree in Microelectronics and Semiconductors at Purdue. Photo: Purdue University

Purdue built and announced its bold high-technology online program while the U.S. Congress was still debating the $53 billion “Creating Helpful Incentives to Produce Semiconductors for America Act” (CHIPS America Act), which would be passed in July 2022 and signed into law in August.

Today, the online Master’s in Microelectronics and Semiconductors is well underway. Students learn leading-edge equipment and software and prepare to meet the challenges they will face in a rejuvenated, and critical, U.S. semiconductor industry.

Is the drive for semiconductor education succeeding?

“I think we have conclusively established that the answer is a resounding ‘Yes,’” says Raghunathan. Like understanding big data, or being able to program, “the ability to understand how semiconductors and semiconductor-based systems work, even at a rudimentary level, is something that everybody should know. Virtually any product you design or make is going to have chips inside it. You need to understand how they work, what the significance is, and what the risks are.”

Earning a Master’s in Microelectronics and Semiconductors

Students pursuing the Master’s Degree in Microelectronics and Semiconductors will take courses in circuit design, devices and engineering, systems design, and supply chain management offered by several schools in the university, such as Purdue’s Mitch Daniels School of Business, the Purdue Polytechnic Institute, the Elmore Family School of Electrical and Computer Engineering, and the School of Materials Engineering, among others.

Professionals can also take one-credit-hour courses, which are intended to help students build “breadth at the edges,” a notion that grew out of feedback from employers: Tomorrow’s engineering leaders will need broad knowledge to connect with other specialties in the increasingly interdisciplinary world of artificial intelligence, robotics, and the Internet of Things.

“This was something that we embarked on as an experiment 5 or 6 years ago,” says Raghunathan of the one-credit courses. “I think, in hindsight, that it’s turned out spectacularly.”

A researcher adjusts imaging equipment in a lab in Birck Nanotechnology Center, home to Purdue’s advanced research and development on semiconductors and other technology at the atomic scale. Photo: Rebecca Robiños/Purdue University

The Semiconductor Engineering Education Leader

Purdue, which opened its first classes in 1874, is today an acknowledged leader in engineering education. U.S. News & World Report has ranked the university’s graduate engineering program among America’s 10 best every year since 2012 (and among the top 4 since 2022). And Purdue’s online graduate engineering program has ranked in the country’s top three since the publication started evaluating online grad programs in 2020. (Purdue has offered distance Master’s degrees since the 1980s. Back then, of course, course lectures were videotaped and mailed to students. With the growth of the web, “distance” became “online,” and the program has swelled.)

Thus, Microelectronics and Semiconductors Master’s Degree candidates can study online or on-campus. Both tracks take the same courses from the same instructors and earn the same degree. There are no footnotes, asterisks, or parentheses on the diploma to denote online or in-person study.

“If you look at our program, it will become clear why Purdue is increasingly considered America’s leading semiconductors university” —Prof. Vijay Raghunathan, Purdue University

Students take classes at their own pace, using an integrated suite of proven online-learning applications for attending lectures, submitting homework, taking tests, and communicating with faculty and one another. Texts may be purchased or downloaded from the school library, and there is frequent use of modeling and analytical tools like MATLAB. In addition, Purdue is home to the national design-computing resource nanoHUB.org (with hundreds of modeling, simulation, teaching, and software-development tools) and its offspring, chipshub.org (specializing in tools for chip design and fabrication).

From R&D to Workforce and Economic Development

“If you look at our program, it will become clear why Purdue is increasingly considered America’s leading semiconductors university, because this is such a strategic priority for the entire university, from our President all the way down,” Prof. Raghunathan sums up. “We have a task force that reports directly to the President, a task force focused only on semiconductors and microelectronics. On all aspects—R&D, the innovation pipeline, workforce development, economic development to bring companies to the state. We’re all in as far as chips are concerned.”

Why a Technical Master’s Degree Can Accelerate Your Engineering Career



This sponsored article is brought to you by Purdue University.

Companies large and small are seeking engineers with up-to-date, subject-specific knowledge in disciplines like computer engineering, automation, artificial intelligence, and circuit design. Mid-level engineers need to advance their skillsets to apply and integrate these technologies and be competitive.


As applications for new technologies continue to grow, demand for knowledgeable electrical and computer engineers is also on the rise. According to the Bureau of Labor Statistics, the job outlook for electrical and electronics engineers—as well as computer hardware engineers—is set to grow 5 percent through 2032. Electrical and computer engineers work in almost every industry. They design systems, work on power transmission and power supplies, run computers and communication systems, innovate chips for embedded systems, and much more.

To take advantage of this job growth and get more return-on-investment, engineers are advancing their knowledge by going back to school. The 2023 IEEE-USA Salary and Benefits Survey Report shows that engineers with focused master’s degrees (e.g., electrical and computer engineering, electrical engineering, or computer engineering) earned median salaries almost US $27,000 per year higher than their colleagues with bachelors’ degrees alone.


Purdue’s online MSECE program has been ranked in the top 3 of U.S. News and World Report’s Best Online Electrical Engineering Master’s Programs for five years running


Universities like Purdue University work with companies and professionals to provide upskilling opportunities via distance and online education. Purdue has offered a distance Master of Science in Electrical and Computer Engineering (MSECE) since the 1980s. In its early years, the program’s course lectures were videotaped and mailed to students. Now, “distance” has transformed into “online,” and the program has grown with the web, expanding its size and scope. The online MSECE has awarded master’s degrees to more than 190 online students since the Fall 2021 semester.




“Purdue has a long-standing reputation of engineering excellence and Purdue engineers work worldwide in every company, including General Motors, Northrop Grumman, Raytheon, Texas Instruments, Apple, and Sandia National Laboratories among scores of others,” said Lynn Hegewald, the senior program manager for Purdue’s online MSECE. “Employers everywhere are very aware of Purdue graduates’ capabilities and the quality of the education they bring to the job.”


Today, the online MSECE program continues to select from among the world’s best professionals and gives them an affordable, award-winning education. The program has been ranked in the top 3 of U.S. News and World Report’s Best Online Electrical Engineering Master’s Programs for five years running (2020, 2021, 2022, 2023, and 2024).


The online MSECE offers high-quality research and technical skills, high-level analytical thinking and problem-solving skills, and new ideas to help innovate—all highly sought-after, according to one of the few studies to systematically inventory what engineering employers want (information corroborated on occupational guidance websites like O-Net and the Bureau of Labor Statistics).

Remote students get the same education as on-campus students and become part of the same alumni network.

“Our online MSECE program offers the same exceptional quality as our on-campus offerings to students around the country and the globe,” says Prof. Milind Kulkarni, Michael and Katherine Birck Head of the Elmore Family School of Electrical and Computer Engineering. “Online students take the same classes, with the same professors, as on-campus students; they work on the same assignments and even collaborate on group projects.


“Our online MSECE program offers the same exceptional quality as our on-campus offerings to students around the country and the globe” —Prof. Milind Kulkarni, Purdue University


“We’re very proud,” he adds, “that we’re able to make a ‘full-strength’ Purdue ECE degree available to so many people, whether they’re working full-time across the country, live abroad, or serve in the military. And the results bear this out: graduates of our program land jobs at top global companies, move on to new roles and responsibilities at their current organizations, or even continue to pursue graduate education at top PhD programs.”




Variety and Quality in Purdue’s MSECE

As they study for their MSECE degrees, online students can select from among a hundred graduate-level courses in their primary areas of interest, including innovative one-credit-hour courses that extend the students’ knowledge. New courses and new areas of interest are always in the pipeline.

Purdue MSECE Area of Interest and Course Options


  • Automatic Control
  • Communications, Networking, Signal and Image Processing
  • Computer Engineering
  • Fields and Optics
  • Microelectronics and Nanotechnology
  • Power and Energy Systems
  • VLSI and Circuit Design
  • Semiconductors
  • Data Mining
  • Quantum Computing
  • IoT
  • Big Data


Heather Woods, a process engineer at Texas Instruments, was one of the first students to enroll and chose the microelectronics and nanotechnology focus area. She offers this advice: “Take advantage of the one-credit-hour classes! They let you finish your degree faster while not taking six credit hours every semester.”


Completing an online MSECE from Purdue University also teaches students professional skills that employers value like motivation, efficient time-management, high-level analysis and problem-solving, and the ability to learn quickly and write effectively.

“Having an MSECE shows I have the dedication and knowledge to be able to solve problems in engineering,” said program alumnus Benjamin Francis, now an engineering manager at AkzoNobel. “As I continue in my career, this gives me an advantage over other engineers both in terms of professional advancement opportunity and a technical base to pull information from to face new challenges.”


Finding Tuition Assistance

Working engineers contemplating graduate school should contact their human resources departments and find out what their tuition-assistance options are. Does your company offer tuition assistance? What courses of study do they cover? Do they cap reimbursements by course, semester, etc.? Does your employer pay tuition directly, or will you pay out-of-pocket and apply for reimbursement?

Prospective U.S. students who are veterans or children of veterans should also check with the U.S. Department of Veterans Affairs to see if they qualify for tuition or other assistance.


The MSECE Advantage

In sum, the online Master’s degree in Electrical and Computer Engineering from Purdue University does an extraordinary job giving students the tools they need to succeed in school and then in the workplace: developing the technical knowledge, the confidence, and the often-overlooked professional skills that will help them excel in their careers.

Quantum Leap: Sydney’s Leading Role in the Next Tech Wave



This is a sponsored article brought to you by BESydney.

Australia plays a crucial role in global scientific endeavours, and its contributions are recognized and valued worldwide. Despite comprising only 0.3 percent of the world’s population, it has contributed over 4 percent of the world’s published research.

Renowned for collaboration, Australian scientists work across disciplines and with international counterparts to achieve impactful outcomes. Notably excelling in medical sciences, engineering, and biological sciences, Australia also has globally recognized expertise in astronomy, physics and computer science.

As the country’s innovation hub, Sydney is leveraging its robust scientific infrastructure, world-class universities and vibrant ecosystem to make its mark on this burgeoning industry.

The city’s commitment to quantum research and development is evidenced by its groundbreaking advancements and substantial government support, positioning it at the forefront of the quantum revolution.

Sydney’s blend of academic excellence, industry collaboration and strategic government initiatives is creating a fertile ground for cutting-edge quantum advancements.

Sydney’s quantum ecosystem

Sydney’s quantum industry is bolstered by the Sydney Quantum Academy (SQA), a collaboration between four top-tier universities: University of NSW Sydney (UNSW Sydney), the University of Sydney (USYD), University of Technology Sydney (UTS), and Macquarie University. SQA integrates over 100 experts, fostering a dynamic quantum research and development environment.

With strong government backing, Sydney is poised for significant growth in quantum technology, with a projected A$2.2 billion industry value and 8,700 jobs by 2030. The SQA’s mission is to cultivate a quantum-literate workforce, support industry partnerships and accelerate the development of quantum technology.

Professor Hugh Durrant-Whyte, NSW Chief Scientist and Engineer, emphasizes Sydney’s unique position: “We’ve invested in quantum for 20 years, and we have some of the best people at the Quantum Academy in Sydney. This investment and talent pool make Sydney an ideal place for pioneering quantum research and attracting global talent.”

Key institutions and innovations

UNSW’s Centre of Excellence for Quantum Computation and Communication Technology is at the heart of Sydney’s quantum advancements. Led by Scientia Professor Michelle Simmons AO, the founder and CEO of Silicon Quantum Computing, the centre is pioneering efforts to develop the world’s first practical quantum computer. Its team is at the vanguard of precision atomic electronics, pioneering the fabrication of silicon devices that are pivotal for both conventional and quantum computing applications; they have created the narrowest conducting wires and the smallest precision transistors.

“We can now not only put atoms in place but can connect complete circuitry with atomic precision.” —Michelle Simmons, Silicon Quantum Computing

Simmons was named 2018 Australian of the Year and won the 2023 Prime Minister’s Prize for Science for her work in creating the new field of atomic electronics. She is an Australian Research Council Laureate Fellow and a Fellow of the Royal Society of London, the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the UK Institute of Physics, the Australian Academy of Technology and Engineering, and the Australian Academy of Science.

In response to her 2023 accolade, Simmons said: “Twenty years ago, the ability to manipulate individual atoms and put them where we want in a device architecture was unimaginable. We can now not only put atoms in place but can connect complete circuitry with atomic precision—a capability that was developed entirely in Australia.”

The Design Futures Lab at UNSW in Sydney, Australia, is a hands-on teaching and research lab that aims to inspire exploration, innovation, and research into fabrication, emerging technologies, and design theories. Photo: UNSW

Government and industry support

In April 2024, the Australian Centre for Quantum Growth program, part of the National Quantum Strategy, provided a substantial four-year grant to support the quantum industry’s expansion in Australia. Managed by the University of Sydney, the initiative aims to establish a central hub that fosters industry growth, collaboration, and research coordination.

This centre will serve as a primary resource for the quantum sector, enhancing Australia’s global competitiveness by promoting industry-led solutions and advancing technology adoption both domestically and internationally. Additionally, the centre will emphasise ethical practices and security in the development and application of quantum technologies.

Additionally, Sydney hosts several leading quantum startups, such as Silicon Quantum Computing, Quantum Brilliance, Diraq and Q-CTRL, which focus on improving the performance and stability of quantum systems.

Educational excellence

Sydney’s universities are globally recognized for their contributions to quantum research. They nurture future quantum leaders, and their academic prowess attracts top talent and fosters a culture of innovation and collaboration.


UNSW Sydney, one of Sydney’s universities, is ranked among the world’s top 20 universities and boasts the largest concentration of academics working in AI and quantum technologies in Australia.

UNSW Sydney’s Toby Walsh is a Laureate Fellow and Scientia Professor of Artificial Intelligence at the Department of Computer Science and Engineering at the University of New South Wales. He explains the significance of this academic strength: “Our students and researchers are at the cutting edge of quantum science. The collaborative efforts within Sydney’s academic institutions are creating a powerhouse of innovation that is driving the global quantum agenda.”

Sydney’s strategic investments and collaborative efforts in quantum technology have propelled the city to the forefront of this transformative field. With its unique and vibrant ecosystem, a blend of world-leading institutions, globally respected talent and strong government and industry support, Sydney is well-positioned to lead the global quantum revolution for the benefit of all. For more information on Sydney’s science and engineering industries visit besydney.com.au.
