
Delivering the next-generation barcode

The world’s first barcode, designed in 1948, took more than 25 years to make it out of the lab and onto a retail package. Since then, the barcode has done much more than make grocery checkouts faster—it has remade our understanding of how physical objects can be identified and tracked, creating a new pace and set of expectations for the speed and reliability of modern commerce.

Nearly eighty years later, a new iteration of that technology, which encodes data in two dimensions, is poised to take the stage. Today’s 2D barcode is not only out of the lab but “open to a world of possibility,” says Carrie Wilkie, senior vice president of standards and technology at GS1 US.

2D barcodes encode substantially more information than their 1D counterparts. This enables them to link physical objects to a wide array of digital resources. For consumers, 2D barcodes can provide a wealth of product information, from food allergens, expiration dates, and safety recalls to detailed medication use instructions, coupons, and product offers. For businesses, 2D barcodes can enhance operational efficiencies, create traceability at the lot or item level, and drive new forms of customer engagement.

An array of 2D barcode types supports the information needs of a variety of industries. The GS1 DataMatrix, for example, is used on medications and medical devices, encoding expiration dates, batch and lot numbers, and FDA National Drug Codes. The QR Code is familiar to consumers who have used one to open a website from their phone. Adding a GS1 Digital Link URI to a QR Code lets it serve two purposes: as a traditional barcode for supply chain operations, enabling tracking throughout the supply chain and price lookup at checkout, and as a consumer-facing link to digital information, such as expiry dates and serial numbers.
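
As a rough illustration of how that dual purpose works, the sketch below composes a GS1 Digital Link URI of the kind that might be encoded in a QR Code. The resolver domain and identifier values are hypothetical; real deployments follow the GS1 Digital Link standard and use the brand's own resolver.

```python
# Illustrative sketch only: composing a GS1 Digital Link URI for a QR Code.
# The domain and values are hypothetical placeholders.

gtin = "09506000134352"   # Application Identifier (01): the product's GTIN
batch = "LOT42A"          # AI (10): batch/lot number
expiry = "261231"         # AI (17): expiration date, YYMMDD

uri = f"https://id.example.com/01/{gtin}/10/{batch}?17={expiry}"
print(uri)
# A point-of-sale scanner reads the GTIN for price lookup and supply chain
# tracking, while a consumer's phone resolves the same URI to a web page with
# product details such as the expiry date.
```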

Regardless of type, however, all 2D barcodes require a business ecosystem backed by data. To capture new value from advanced barcodes, organizations must supply and manage clean, accurate, and interoperable data around their products and materials. For 2D barcodes to deliver on their potential, businesses will need to collaborate with partners, suppliers, and customers and commit to common data standards across the value chain.

Driving the demand for 2D barcodes

Shifting to 2D barcodes—and enabling the data ecosystems behind them—will require investment by business. Consumer engagement, compliance, and sustainability are among the many factors driving this transition.

Real-time consumer engagement: Today’s customers want to feel connected to the brands they interact with and purchase from. Information is a key element of that engagement and empowerment. “When I think about customer satisfaction,” says Leslie Hand, group vice president for IDC Retail Insights, “I’m thinking about how I can provide more information that allows them to make better decisions about their own lives and the things they buy.”

2D barcodes can help by connecting consumers to online content in real time. “If, by using a 2D barcode, you have the capability to connect to a consumer in a specific region, or a specific store, and you have the ability to provide information to that consumer about the specific product in their hand, that can be a really powerful consumer engagement tool,” says Dan Hardy, director of customer operations for HanesBrands, Inc. “2D barcodes can bring brand and product connectivity directly to an individual consumer, and create an interaction that supports your brand message at an individual consumer/product level.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Chasing AI’s value in life sciences

Inspired by an unprecedented opportunity, the life sciences sector has gone all in on AI. For example, in 2023, Pfizer introduced an internal generative AI platform expected to deliver $750 million to $1 billion in value. And Moderna partnered with OpenAI in April 2024, scaling its AI efforts to deploy ChatGPT Enterprise, embedding the tool’s capabilities across business functions from legal to research.

In drug development, German pharmaceutical company Merck KGaA has partnered with several AI companies for drug discovery and development. And Exscientia, a pioneer in using AI in drug discovery, is taking more steps toward integrating generative AI drug design with robotic lab automation in collaboration with Amazon Web Services (AWS).

Given rising competition, higher customer expectations, and growing regulatory challenges, these investments are crucial. But to maximize their value, leaders must carefully consider how to balance the key factors of scope, scale, speed, and human-AI collaboration.

The early promise of connecting data

The common refrain from data leaders across all industries—but specifically from those within data-rich life sciences organizations—is “I have vast amounts of data all over my organization, but the people who need it can’t find it,” says Dan Sheeran, general manager of health care and life sciences for AWS. And in a complex healthcare ecosystem, data can come from multiple sources, including hospitals, pharmacies, insurers, and patients.

“Addressing this challenge,” says Sheeran, “means applying metadata to all existing data and then creating tools to find it, mimicking the ease of a search engine. Until generative AI came along, though, creating that metadata was extremely time consuming.”
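
As a minimal sketch of the pattern Sheeran describes, the example below uses a generative model to draft search metadata for an unstructured document, which can then be indexed so people can find the underlying data with search-engine-like ease. The call_llm helper is a stand-in, not any specific vendor API; it would be wired to whatever model endpoint an organization actually uses.

```python
# Sketch: draft search metadata for an unstructured document with a generative
# model. call_llm() is a placeholder so the example runs end to end.
import json

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; returns a canned, invented response.
    return json.dumps({
        "title": "Example clinical study report",
        "summary": "Hypothetical interim readout used for illustration only.",
        "keywords": ["clinical", "interim analysis", "example"],
        "data_domain": "research",
    })

def generate_metadata(document_text: str) -> dict:
    prompt = (
        "Read the document below and return JSON with the keys "
        "'title', 'summary', 'keywords' (a list), and 'data_domain'.\n\n"
        + document_text
    )
    return json.loads(call_llm(prompt))

print(generate_metadata("...unstructured report text goes here..."))
```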

Mahmood Majeed, ZS’s global head of the digital and technology practice, notes that his teams regularly work on connected data programs, because “connecting data to enable connected decisions across the enterprise gives you the ability to create differentiated experiences.”

Majeed points to Sanofi’s well-publicized example of connecting data with its analytics app, plai, which streamlines research and automates time-consuming data tasks. With this investment, Sanofi reports reducing research processes from weeks to hours and the potential to improve target identification in therapeutic areas like immunology, oncology, or neurology by 20% to 30%.

Achieving the payoff of personalization

Connected data also allows companies to focus on personalized last-mile experiences. This involves tailoring interactions with healthcare providers and understanding patients’ individual motivations, needs, and behaviors.

Early efforts around personalization have relied on “next best action” or “next best engagement” models to do this. These traditional machine learning (ML) models suggest the most appropriate information for field teams to share with healthcare providers, based on predetermined guidelines.
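
To make the mechanics concrete, here is a deliberately simplified sketch of a next-best-action style model: score candidate outreach options for a healthcare provider and pick the one with the highest predicted engagement. The features, training data, and candidate actions are invented for illustration and are not any company's production system.

```python
# Toy "next best action" model: rank candidate outreach options for a
# healthcare provider by predicted engagement. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past interactions: [provider_is_specialist, contacted_by_email, content_is_clinical_study]
X = np.array([
    [1, 1, 1], [1, 0, 1], [0, 1, 0],
    [0, 0, 1], [1, 1, 0], [0, 1, 1],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = the provider engaged with the content

model = LogisticRegression().fit(X, y)

# Candidate actions for a specialist provider, expressed with the same features.
candidates = {
    "email_clinical_study": [1, 1, 1],
    "email_promotional_note": [1, 1, 0],
    "rep_visit_clinical_study": [1, 0, 1],
}
scores = {name: model.predict_proba([feats])[0, 1] for name, feats in candidates.items()}
best = max(scores, key=scores.get)
print(f"Suggested next action: {best} (p={scores[best]:.2f})")
```

In practice, the predetermined guidelines mentioned above sit on top of a score like this, constraining which suggestions field teams actually see.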

Compared with generative AI models, traditional machine learning models can be inflexible and unable to adapt to individual provider needs, and they often struggle to connect with other data sources that could provide meaningful context. As a result, their insights can be helpful but limited.

Sheeran notes that companies have a real opportunity to use connected data to improve their decision-making processes: “Because the technology is generative, it can create context based on signals. How does this healthcare provider like to receive information? What insights can we draw about the questions they’re asking? Can their professional history or past prescribing behavior help us provide a more contextualized answer? This is exactly what generative AI is great for.”

Beyond this, pharmaceutical companies spend millions of dollars annually to customize marketing materials. They must ensure the content is translated, tailored to the audience, and consistent with regulations for each location where they offer products and services. A process that usually takes weeks to develop individual assets has become a perfect use case for generative copy and imagery. With generative AI, the process is reduced from weeks to minutes and creates a competitive advantage with lower costs per asset, Sheeran says.

Accelerating drug discovery with AI, one step at a time

Perhaps the greatest hope for AI in life sciences is its ability to generate insights and intellectual property using biology-specific foundation models. Sheeran says, “our customers have seen the potential for very, very large models to greatly accelerate certain discrete steps in the drug discovery and development processes.” He continues, “Now we have a much broader range of models available, and an even larger set of models coming that tackle other discrete steps.”

By Sheeran’s count, there are approximately six major categories of biology-specific models, each containing five to 25 models under development or already available from universities and commercial organizations.

Protecting the intellectual property generated by biology-specific models is a significant consideration. Services such as Amazon Bedrock support this by ensuring customers retain control over their data, with transparency and safeguards to prevent unauthorized retention and misuse.

Finding differentiation in life sciences with scope, scale, and speed

Organizations can differentiate with scope, scale, and speed, while determining how AI can best augment human ingenuity and judgment. “Technology has become so easy to access. It’s omnipresent. What that means is that it’s no longer a differentiator on its own,” says Majeed. He suggests that life sciences leaders consider:

Scope: Have we zeroed in on the right problem? By clearly articulating the problem relative to the few critical things that could drive advantage, organizations can identify technology and business collaborators and set standards for measuring success and driving tangible results.

Scale: What happens when we implement a technology solution on a large scale? The highest-priority AI solutions should be the ones with the most potential for results. Scale determines whether an AI initiative will have a broader, more widespread impact on a business, which provides the window for a greater return on investment, says Majeed.

By thinking through the implications of scale from the beginning, organizations can be clear on the magnitude of change they expect and how bold they need to be to achieve it. The boldest commitment to scale is when companies go all in on AI, as Sanofi is doing, setting goals to transform the entire value chain and setting the tone from the very top.

Speed: Are we set up to quickly learn and correct course? Organizations that can rapidly learn from their data and AI experiments, adjust based on those learnings, and continuously iterate are the ones that will see the most success. Majeed emphasizes, “Don’t underestimate this component; it’s where most of the work happens. A good partner will set you up for quick wins, keeping your teams learning and maintaining momentum.”

Sheeran adds, “ZS has become a trusted partner for AWS because our customers trust that they have the right domain expertise. A company like ZS has the ability to focus on the right uses of AI because they’re in the field and on the ground with medical professionals, giving them the ability to constantly stay ahead of the curve by exploring the best ways to improve their current workflows.”

Human-AI collaboration at the heart

Despite the allure of generative AI, the human element is the ultimate determinant of how it’s used. In certain cases, traditional technologies outperform it, with less risk, so understanding what it’s good for is key. By cultivating broad technology and AI fluency throughout the organization, leaders can teach their people to find the most powerful combinations of human-AI collaboration for technology solutions that work. After all, as Majeed says, “it’s all about people—whether it’s customers, patients, or our own employees’ and users’ experiences.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Avoiding value decay in digital transformation

Mission-critical digital transformation projects too often end with a whimper rather than a bang. An estimated three-quarters of corporate transformation efforts fail to deliver their intended return on investment.

Given the rapidly evolving technology landscape, companies often struggle to deliver short-term results while simultaneously reinventing the organization and keeping the business running day-to-day. Post-implementation, some companies cannot even perform basic functions like processing orders efficiently or closing the books quickly at the end of a quarter. The problem: Leaders often fail to consider how to sustain value creation over time as programs scale from the pilot phase to wide-scale execution.

“Most implementations are viewed as IT projects,” says Tim Hertzig, a principal in Deloitte’s Technology practice and global product owner of Deloitte’s Ascend digital transformation solution. “These projects fail to achieve the value they initially aspire to, because they don’t factor in change management that ensures adoption and they don’t consider industry-leading practices.”

Technology rarely drives value alone, according to Kristi Kaplan, Deloitte principal and US executive sponsor of Deloitte’s Ascend platform. “Rather it’s how technology is implemented and adopted in an organization that actually creates the value,” she says. To deliver business results that gain momentum rather than fade away, executives need a long-term transformation plan.

According to Deloitte’s analysis, the right combination of digital transformation actions can unlock as much as $1.25 trillion in additional market capitalization across all Fortune 500 companies. On the other hand, implementing digital change for its own sake without a strategy and technology-aligned investments—“random acts of digital”—could cost firms $1.5 trillion.

Best practices for implementation

To unlock this potential value, there are a number of best practices leading companies use to design and execute digital transformations successfully, Deloitte has found. Three stand out:

Ensure inclusive governance: Project governance needs to span business, HR, finance, and IT stakeholders, creating transparency in reporting and decision-making to maintain forward momentum. Successful projects are jointly owned; all executives understand where they are in the project lifecycle and what decisions need to be made to keep the program moving.

“Where that transparency doesn’t exist, or where all the stakeholders are not at the table and do not feel ownership in these programs, the result can be an IT organization that’s driving what truly needs to be a business transformation,” says Kaplan. “When business leaders fail to own things like change management, technology adoption, and organizational retraining, the risk profile goes way up.”

“Executives need the assurance and the visibility that the ROI of their technology investments is being realized, and when there are risks, they need transparency before problems grow into full blown issues,” Hertzig adds. “That transparency becomes embedded into the governance rhythms of an organization.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Investing in AI to build next-generation infrastructure

The demand for new and improved infrastructure across the world is not being met. The Asian Development Bank has estimated that in Asia alone, roughly $1.7 trillion needs to be invested annually through to 2030 just to sustain economic growth and offset the effects of climate change. Globally, that figure has been put at $15 trillion.

In the US, for example, it is no secret that the country’s highways, railways, and bridges are in need of updating. But as in many other sectors, significant shortages of skilled workers and resources delay all-important repairs and maintenance and harm efficiency.

This infrastructure gap – the difference between what is funded and what gets built – is vast. And while governments and companies everywhere are feeling the strain of constructing an energy-efficient and sustainable built environment, it’s proving more than humans can do alone. To redress this imbalance, many organizations are turning to various forms of AI, including large language models (LLMs) and machine learning (ML). Collectively, these tools cannot yet fix every infrastructure problem, but they are already helping to reduce costs and risks and increase efficiency.

Overcoming resource constraints

A shortage of skilled engineering and construction labor is a major problem. In the US, it is estimated that there will be a 33% shortfall in the supply of new talent by 2031, with unfilled positions in software, industrial, civil, and electrical engineering. Germany reported a shortage of 320,000 science, technology, engineering, and mathematics (STEM) specialists in 2022, and another engineering powerhouse, Japan, has forecast a deficit of more than 700,000 engineers by 2030. Considering the duration of most engineering projects (repairing a broken gas pipeline, for example, can take decades), the demand for qualified engineers will only continue to outstrip supply unless something is done.

Immigration and visa restrictions for international engineering students, and a lack of retention in formative STEM jobs, exert additional constraints. Then there is the issue of duplicated, repetitive tasks, which AI can handle with ease.

Julien Moutte, CTO of Bentley Systems, explains: “There’s a massive amount of work that engineers have to do that is tedious and repetitive. Between 30% and 50% of their time is spent just compressing 3D models into 2D PDF formats. If that work can be done by AI-powered tools, they can recover half their working time, which could then be invested in performing higher-value tasks.”

With guidance, AI can automate the production of the same drawings hundreds of times over. Training engineers to ask the right questions and to use AI optimally will ease the burden and stress of that repetition.

However, this is not without challenges. Users of ChatGPT, or other LLMs, know the pitfalls of AI hallucinations, where the model can logically predict a sequence of words but without contextual understanding of what the words mean. This can lead to nonsensical outputs, but in engineering, hallucinations can sometimes be altogether more risky. “If a recommendation was made by AI, it needs to be validated,” says Moutte. “Is that recommendation safe? Does it respect the laws of physics? And it’s a waste of time for the engineers to have to review all these things.”

But this can be offset by having existing company tools and products run simulations and validate the designs against established engineering rules and design codes, which again relieves engineers of the burden of doing the validation themselves.

Improving resource efficiency

An estimated 30% of building materials, such as steel and concrete, are wasted on a typical construction site in the United States and the United Kingdom, with the majority ending up in landfills, although countries such as Germany and the Netherlands have recently implemented recycling measures. This waste, along with the rising cost of raw materials, is putting pressure on companies to find ways to improve construction efficiency and sustainability.

AI can provide solutions to both of these issues during the design and construction phases. Digital twins can help workers spot deviations in product quality and provide the insights needed to minimize waste and energy use and, crucially, save money.

Machine learning models use real-time data from field statistics and process variables to flag off-spec materials, product deviations, and excess energy usage from sources such as machinery and transportation for construction site workers. Engineers can then anticipate the gaps and streamline processes, making large-scale improvements to each project that can be replicated in the future.
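
As a hedged sketch of this kind of flagging, the example below trains a simple anomaly detector on invented process readings and marks batches with unusual material or energy use for review. Real deployments would draw on a site's own telemetry and far richer features.

```python
# Toy anomaly detector for off-spec batches; all readings are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: [material_used_kg, energy_kwh] for batches that came out on-spec.
normal_batches = rng.normal(loc=[100.0, 50.0], scale=[5.0, 3.0], size=(200, 2))

detector = IsolationForest(contamination=0.05, random_state=0).fit(normal_batches)

new_batches = np.array([
    [101.0, 51.0],   # looks typical
    [135.0, 78.0],   # excess material and energy use
])
flags = detector.predict(new_batches)  # 1 = normal, -1 = flagged for review
for reading, flag in zip(new_batches, flags):
    status = "flag for review" if flag == -1 else "ok"
    print(reading, status)
```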

“Being able to anticipate and reduce that waste with that visual awareness, with the application of AI to make sure that you are optimizing those processes and those designs and the resources that you need to construct that infrastructure is massive,” says Moutte.

He continues, “The big game changer is going to be around sustainability because we need to create infrastructure with more sustainable and efficient designs, and there’s a lot of room for improvement.” And an important part of this will be how AI can help create new materials and models to reduce waste.

Human and AI partnership

AI might never be entirely error-free, but for the time being, human intervention can catch mistakes. Although there may be some concern in the construction sector that AI will replace humans, there are elements to any construction project that only people can do.

AI lacks the critical thinking and problem-solving skills that humans excel at, so additional training for engineers to supervise and maintain automated systems is key to ensuring the two can work together optimally. Skilled workers bring creativity, intuition, and customer service expertise, while AI is not yet capable of such novel solutions.

With the engineers implementing appropriate guardrails and frameworks, AI can contribute the bulk of automation and repetition to projects, thereby creating a symbiotic and optimal relationship between humans and machines.

“Engineers have been designing impressive buildings for decades already, where they are not doing all the design manually. You need to make sure that those structures are validated first by engineering principles, physical rules, local codes, and the rest. So we have all the tools to be able to validate those designs,” explains Moutte.

As AI advances alongside human care and control, it can help futureproof the construction process where every step is bolstered by the strengths of both sides. By addressing the concerns of the construction industry – costs, sustainability, waste and task repetition – and upskilling engineers to manage AI to address these at the design and implementation stage, the construction sector looks set to be less riddled with potholes.

“We’ve already seen how AI can be used to create new materials and reduce waste,” explains Moutte. “As we move to 2050, I believe engineers will need those AI capabilities to create the best possible designs and I’m looking forward to releasing some of those AI-enabled features in our products.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Transforming software with generative AI

Generative AI’s promises for the software development lifecycle (SDLC)—code that writes itself, fully automated test generation, and developers who spend more time innovating than debugging—are as alluring as they are ambitious. Some bullish industry forecasts project a 30% productivity boost from AI developer tools, which, if realized, could inject more than $1.5 trillion into the global GDP.

But while there’s little doubt that software development is undergoing a profound transformation, separating the hype and speculation from the realities of implementation and ROI is no simple task. As with previous technological revolutions, the dividends won’t be instant. “There’s an equivalency between what’s going on with AI and when digital transformation first happened,” observes Carolina Dolan Chandler, chief digital officer at Globant. “AI is an integral shift. It’s going to affect every single job role in every single way. But it’s going to be a long-term process.”

Where exactly are we on this transformative journey? How are enterprises navigating this new terrain—and what’s still ahead? To investigate how generative AI is impacting the SDLC, MIT Technology Review Insights surveyed more than 300 business leaders about how they’re using the technology in their software and product lifecycles.

The findings reveal that generative AI has rich potential to revolutionize software development, but that many enterprises are still in the early stages of realizing its full impact. While adoption is widespread and accelerating, there are significant untapped opportunities. This report explores the projected course of these advancements, as well as how emerging innovations, including agentic AI, might bring about some of the technology’s loftier promises.

Key findings include the following:

Substantial gains from generative AI in the SDLC still lie ahead. Only 12% of surveyed business leaders say that the technology has “fundamentally” changed how they develop software today. Future gains, however, are widely anticipated: Thirty-eight percent of respondents believe generative AI will “substantially” change the SDLC across most organizations in one to three years, and another 31% say this will happen in four to 10 years.

Use of generative AI in the SDLC is nearly universal, but adoption is not comprehensive. A full 94% of respondents say they’re using generative AI for software development in some capacity. One-fifth (20%) describe generative AI as an “established, well-integrated part” of their SDLC, and one-third (33%) report it’s “widely used” in at least part of their SDLC. Nearly one-third (29%), however, are still “conducting small pilots” or adopting the technology on an individual-employee basis (rather than via a team-wide integration).

Generative AI is not just for code generation. Writing software may be the most obvious use case, but most respondents (82%) report using generative AI in at least two phases of the SDLC, and one-quarter (26%) say they are using it across four or more. The most common additional use cases include designing and prototyping new features, streamlining requirement development, fast-tracking testing, improving bug detection, and boosting overall code quality.

Generative AI is already meeting or exceeding expectations in the SDLC. Even with this room to grow in how fully they integrate generative AI into their software development workflows, 46% of survey respondents say generative AI is already meeting expectations, and 33% say it “exceeds” or “greatly exceeds” expectations.

AI agents represent the next frontier. Looking to the future, almost half (49%) of leaders believe advanced AI tools, such as assistants and agents, will lead to efficiency gains or cost savings. Another 20% believe such tools will lead to improved throughput or faster time to market.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Cloud transformation clears businesses for digital takeoff

In an age where customer experience can make or break a business, Cathay Pacific is embracing cloud transformation to enhance service delivery and revolutionize operations from the inside out. It’s not just technology companies that are facing pressure to deliver better customer service, do more with data, and improve agility. An almost 80-year-old airline, Cathay Pacific embarked on its digital transformation journey in 2014, spurred by a critical IT disruption that became the catalyst for revamping their technology.

By embracing the cloud, the airline has not only streamlined operations but also paved the way for innovative solutions like DevSecOps and AI integration. This shift has enabled Cathay to deliver faster, more reliable services to both passengers and staff, while maintaining a robust security framework in an increasingly digital world. 

According to Rajeev Nair, general manager of IT infrastructure and security at Cathay Pacific, becoming a digital-first airline was met with early resistance from both business and technical teams. The early stages required a lot of heavy lifting as the airline shifted legacy apps first from its server room to a dedicated data center and then to the cloud. From there began the modernization process that Cathay Pacific, now in the final stages of this transformation, continues to fine-tune.

The cloud migration also helped Cathay align with its ESG goals. “Two years ago, if you asked me what IT could do for sustainability, we would’ve been clueless,” says Nair. However, through cloud-first strategies and green IT practices, the airline has made notable strides in reducing its carbon footprint. Currently, the business is in the process of moving to a smaller data center, which will significantly reduce its physical infrastructure and carbon emissions by 2025.

The broader benefits of this cloud transformation for Cathay Pacific go beyond sustainability. Agility, time-to-market, and operational efficiency have improved drastically. “If you ask many of the enterprises, they would probably say that shifting to the cloud is all about cost-saving,” says Nair. “But for me, those are secondary aspects and the key is about how to enable the business to be more agile and nimble so that the business capability could be delivered much faster by IT and the technology team.”

By 2025, Cathay Pacific aims to have 100% of their business applications running on the cloud, significantly enhancing their agility, customer service, and cost efficiency, says Nair.

As Cathay Pacific continues its digital evolution, Nair remains focused on future-proofing the airline through emerging technologies. Looking ahead, he is particularly excited about the potential of AI, generative AI, and virtual reality to further enhance both customer experience and internal operations. From more immersive VR-based training for cabin crew to enabling passengers to preview in-flight products before boarding, these innovations are set to redefine how the airline engages with its customers and staff. 

“We have been exploring that for quite some time, but we believe that it will continue to be a mainstream technology that can change the way we serve the customer,” says Nair.

This episode of Business Lab is produced in association with Infosys Cobalt.

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Our topic today is cloud transformation to meet business goals and customer needs. It’s not just tech companies that have to stay one step ahead. Airlines too are under pressure to deliver better customer service, do more with data, and improve agility. 

Two words for you: going further. 

My guest is Rajeev Nair, who is the general manager of IT infrastructure and security at Cathay Pacific. This podcast is produced in association with Infosys Cobalt. Welcome, Rajeev. 

Rajeev Nair: Thank you. Thank you, Megan. Thank you for having me. 

Megan: Thank you ever so much for joining us. Now to get some context for our conversation today, could you first describe how Cathay Pacific’s digital transformation journey began, and explain, I guess, what stage of this transformation this almost 80-year-old airline is currently in, too? 

Rajeev: Sure, definitely Megan. So for Cathay, we started this transformation journey probably a decade back, way back in 2014. It all started with facing some major service disruption within Cathay IT where it had a massive impact on the business operation. That prompted us to trigger and initiate this transformation journey. So the first thing is we started looking at many of our legacy applications. Back in those days we still had mainframe systems that provided so many of our critical services. We started looking at migrating those legacy apps first, moving them outside of that legacy software and moving them into a proper data center. Back in those days, our data center used to be our corporate headquarters. We didn’t have a dedicated data center and it used to be in a server room. So those were the initial stages of our transformation journey, just a basic building block. So we started moving into a proper data center so that resilience and availability could be improved. 

And as a second phase, we started looking at the cloud. Those days, cloud was just about to kick off in this part of the world. We started looking at migrating to the cloud and it has been a huge challenge or resistance even from the business as well as from the technology team. Once we started moving, shifting apps to the cloud, we had multiple transformation programs to do that modernization activities. Once that is done, then the third phase of the journey is more about your network. Once your applications are moved to the cloud, your network design needs to be completely changed. Then we started looking at how we could modernize our network because Cathay operates in about 180 regions across the world. So our network is very crucial for us. We started looking at redesigning our network. 

And then, it comes to your security aspects. Things moving to the cloud, your network design is getting changed, your cybersecurity needs heavy lifting to accommodate the modern world. We started focusing on cybersecurity initiatives where our security posture has been improved a lot over the last few years. And with those basic building blocks done on the hardware and on the technology side, then comes your IT operations. Because one is your hardware and software piece, but how do you sustain your processes to ensure that it can support those changing technology landscapes? We started investing a lot around the IT operations side, but things like ITIL processes have been revisited. We started adopting many of the DevOps and the DevSecOps practices. So a lot of emphasis around processes and practices to help the team move forward, right? 

And those operations initiatives are in phase. As we stand today, we are at the final stage of our cloud journey where we are looking at how we can optimize it better. So we shifted things to the cloud and that has been a heavy lifting that has been done in the early phases. Now we are focusing around how we can rewrite or refactor your application so that it can better leverage your cloud technologies where we could optimize the performance, thereby optimizing your usage and the cloud resources wherein you could save on the cost as well as on the sustainability aspect. That is where we stand. By 2025, we are looking at moving 100% of our business applications to the cloud and also reducing our physical footprint in our data centers as well. 

Megan: Fantastic. And you mentioned sustainability there. I wonder how does the focus on environmental, social, and governance goals or ESG tie into your wider technology strategy? 

Rajeev: Sure. And to be very honest, Megan, if you asked me this question two years back, we would’ve been clueless on what IT could do from a sustainability aspect. But over the last two years, there has been a lot of focus around ESG components within the technology space where we have done a lot of initiatives since last year to improve and be efficient on the sustainability front. So a couple of key areas that we have done. One is definitely the cloud-first strategy where adopting the cloud-first policy reduces your carbon footprint and it also helps us in migrating away from our data center. So as we speak, we are doing a major project to further reduce our data center size by relocating to a much smaller data center, which will be completed by the end of next year. That will definitely help us to reduce our footprint. 

The second is around adopting the various green IT practices, things like energy efficient devices, be it your PCs or the laptop or virtualizations, and e-based management policies and management aspects. Some of the things are very basic and fundamental in nature. Stuff like we moved away from a dual monitor to a single monitor wherein we could reduce your energy consumption by half, or changing some of your software policies like screen timeouts and putting a monitor in standby. Those kinds of basic things really helped us to optimize and manage. And the last one is around FinOps. So FinOps is a process in the practice that is being heavily adopted in the cloud organization, but it is just not about optimizing your costs because by adopting the FinOps practices and tying in with the GreenOps processes, we are able to focus a lot around reducing our CO2 footprint and optimizing sustainability. Those are some of the practices that we have been doing with Cathay. 

Megan: Yeah, fantastic benefits from relatively small changes there. Other than ESG, what are the other benefits for an enterprise like Cathay Pacific in terms of shifting from those legacy systems to the cloud that you found? 

Rajeev: For me, the key is about agility and time-to-market capability. If you ask many of the enterprises, they would probably say that shifting to the cloud is all about cost-saving. But for me, those are secondary aspects. The key is about how to enable the business to be more agile and nimble so that the business capability can be delivered much faster by IT and the technology team. So as an example, gone are the days when we take about a few months before we provision hardware and have the platform and the applications ready. Now the platforms are being delivered to the developers within an hour’s time so that the developers can quickly build their development environment and be ready for development and testing activities. Right? So agility is a key and the number one factor. 

The second is by shifting to the cloud, you’re also leveraging many of the latest technologies that the cloud comes up with and the provider has to offer. Things like capacity and the ability to scale up and down your resources and services according to your business needs and fluctuations are a huge help from a technology aspect. That way you can deliver customer-centered solutions faster and more efficiently than many of our airline customers and competitors. 

And the last one is, of course, your cost saving aspect and the operational efficiency. By moving away from the legacy systems, we can reduce a lot of capex [capital expenditure]. Like, say for example, I don’t need to spend money on investing in hardware and spend resources to manage those hardware and data center operations, especially in Hong Kong where human resources are pretty expensive and scarce to find. It is very important that I rely on these sorts of technologies to manage those optimally. Those are some of the key aspects that we see from a cloud adoption perspective. 

Megan: Fantastic. And it sounds like it’s been a several year process so far. So after what sounds like pretty heavy investment when it comes to moving legacy hardware on-prem systems to the cloud. What’s your approach now to adapting your IT operations off the back of that? 

Rajeev: Exactly. That is, sort of, just based early in my transformation journey, but yeah, absolutely. By moving to the cloud, it is just not about the hardware, but it’s also about how your operations and your processes align with this changing technology and new capabilities. And, for example, by adopting more agile and scalable approach to managing IT infrastructures and applications as well. Also leveraging the data and insights that the cloud enables. To achieve this, the fundamental aspect of this is how you can revisit and fine tune your IT service management processes, and that is where your core of IT operations have been built in the past. And to manage that properly we recently, I think, over the last three years we were looking at implementing a new IT service management solution, which is built on a product called ServiceNow. So they are built on the core ITIL processes framework to help us manage the service management, the operations management, and asset management. 

Those are some of the capabilities which we rolled out with the help of our partners like Infosys so that it could provide a framework to fine tune and optimize IT processes. And we also adopted things like DevOps and DevSecOps because what we have also noticed is the processes like ITIL, which was very heavy over the last few years around support activities is also shifting. So we wanted to adopt some of these development practices into the support and operations functions to be more agile by shifting left some of these capabilities. And in this journey, Infosys has been our key partner, not only on the cloud transformation side, but also on implementation of ServiceNow, which is our key service management tool where they provided us end-to-end support starting from the planning phase or the initial conceptual phase and also into the design and development and also to the deployment and maintenance. We haven’t completed this journey and it’s still a project that is currently ongoing, and by 2025 we should be able to complete this successfully across the enterprise. 

Megan: Fascinating. It’s an awful lot of change going on. I mean, there must be an internal shift, therefore, that comes with cloud transformation too, I imagine. I wonder, what’s your approach been to up skilling your team to help it excel in this new way of working? 

Rajeev: Yeah, absolutely. And that is always the hardest part. You can change your technology and processes, but changing your people, that’s always toughest and the hardest bit. And essentially this is all about change management, and that has been one of our struggles in our early part of the cloud transformation journey. What we did is we invested a lot in terms of uplifting our traditional infrastructure team. All the traditional technology teams have to go through that learning curve in adopting cloud technology early in our project. And we also provided a lot of training programs, including some of our cloud partners who were able to upskill and train these resources. 

But the key differences that we are seeing is even after providing all those training and upskilling programs, we could see that there was a lot of resistance and a lot of doubts in people’s mind about how cloud is going to help the organization. And the best part is what we did is we included these team members into our project so that they get the hands-on experience. And once they start seeing the benefits around these technologies, there was no looking back. And the team was able to completely embrace the cloud technologies to the point that we still have a traditional technology team who’s supporting the remaining hardware and the servers of the world, but they’re also very keen to shift across the line and adopt and embrace the cloud technology. But it’s been quite a journey for us. 

Megan: That’s great to hear that you’ve managed to bring them along with you. And I suppose it’d be remiss of me if we’re talking about embracing new technologies not to talk about AI, although still in its early stages in most industries. I wonder how is Cathay Pacific approaching AI adoption as well? 

Rajeev: Sure. I think these days none of these conversations can be complete without talking about AI and gen AI. We started this early exploratory phase early into the game, especially in this part of the world. But for us, the key is approaching this based on the customer’s pain points and business needs and then we work backward to identify what type of AI is best suitable or relevant to us. In Cathay, currently, we focus on three main types of AI. One is of course conversational AI. Essentially, it is a form of an internal and external chatbot. Our chatbot, we call it Vera, serves customers directly and can handle about 50% of the inquiries successfully. And just about two weeks back, we upgraded the LLM with a new model, the chatbot with a new model, which is able to be more efficient and much more responsive in terms of the human work. So that’s one part of the AI that we heavily invested on. 

Second is RPA, or robotic process automation, especially what you’re seeing is during the pandemic and post-Covid era, there is limited resources available, especially in Hong Kong and across our supply chain. So RPA or the robotic processes helps to automate mundane repetitive tasks, which doesn’t only fill the resource gap, but it also directly enhances the employee experience. And so far in Cathay, we have about a hundred bots in production serving various business units, serving approximately 30,000 hours every year of human activity. So that’s the second part. 

The third one is around ML and it’s the gen AI. So like our digital team or the data science team has developed about 70-plus ML models in Cathay that turned the organization data into insights or actionable items. These models help us to make a better decision. For example, what meals to be loaded into the aircraft and specific routes, in terms of what quantity and what kind of product offers we promote to customers, and including the fare loading and the pricing of our passenger as well as a cargo bay space. There is a lot of exploration that is being done in this space as well. And a couple of examples I could relate is if you ever happen to come to Hong Kong, next time at the airport, you could hear the public announcement system and that is also AI-powered recently. In the past, our staff used to manually make those announcements and now it has been moved away and has been moved into AI-powered voice technology so that we could be consistent in our announcement. 

Megan: Oh, fantastic. I’ll have to listen for it next time I’m at Hong Kong airport. And you’ve mentioned this topic a couple of times in the conversation. Look, when we’re talking about cloud modernization, cybersecurity can be a roadblock to agility, I guess, if it’s not managed effectively. So could you also tell us in a little more detail how Cathay Pacific has integrated security into its digital transformation journey, particularly with the adoption of development security operations practices that you’ve mentioned? 

Rajeev: Yeah, this is an interesting one. I look after cybersecurity as well as the infrastructure services. With both of these critical functions around my hand, I need to be mindful of both aspects, right? Yes, it’s an interesting one and it has changed over the period of time, and I fully understand why cybersecurity practices needs to be rigid because there is a lot of compliance and it is a highly regulated function, but if something goes wrong, as a CISO we are held accountable for those faults. I can understand why the team is so rigid in their practices. And I also understand from a business perspective it could be perceived as a road blocker to agility. 

One of the key aspects that we have done in Cathay is we have been following DevOps for quite a number of years, and recently, I think in the last two years, we started implementing DevSecOps into our STLC [software testing life cycle]. And what it essentially means is rather than the core cybersecurity team being responsible for many of the security testing and those sorts of aspects, we want to shift left some of these capabilities into the developers so that the people who develop the code now are held accountable for the testing and the quality of the output. And they’re also enabled in terms of the cybersecurity process. Right? 

Of course, when we started off this journey, there has been a huge resistance on the security team itself because they don’t really trust the developers trying to do the testing or the testing outputs. But over a period of time with the introduction of various tools and automation that is put in place, this is now getting into a matured stage wherein it is now enabling the upfront teams to take care of all the aspects of security, like threat modeling, code scanning, and the vulnerability testing. But at the end, the security teams would be still validating and act as a sort of a gatekeeper, but in a very light and inbuilt processes. And this way we can ensure that our cloud applications are secure by design and by default they can deliver them faster and more reliably to our customers. And in this entire process, right? 

In the past, security has been always perceived as an accountability of the cybersecurity team. And by enabling the developers of the security aspects, now you have a better ownership in the organization when it comes to cybersecurity and it is building a better cybersecurity culture within the organization. And that, to me, is a key because from a security aspect, we always say that people are your first line of defense and often they’re also the last line of defense. I’m glad that by these processes we are able to improve that maturity in the organization. 

Megan: Absolutely. And you mentioned that obviously cybersecurity is something that’s really important to a lot of customers nowadays as well. I wondered if you could offer some other examples too of how your digital transformation has improved that customer experience in other ways? 

Rajeev: Yeah, definitely. Maybe I can quote a few examples, Megan. One is around our pilots. You would’ve seen when you travel through the airport or in the aircraft that pilots usually carry a briefcase when they load the flight, and you are often probably wondering what exactly they carry. Basically, that contains a bunch of papers. It contains your weather charts, your navigation routes, and the flight plans, the crew details. It’s a whole stack of paper that they have to carry on each and every flight. And in Cathay, by digitization, we have automated that in their processes, where now they carry an iPad instead of a bunch of papers or briefing pack. So that iPad includes all these softwares that is required for the captain to operate the flight in a legally and a safe manner. 

Paperless cockpit operation is nothing new. Many airlines have attempted to do that, but I should say that Cathay has been on the forefront in truly establishing a paperless operation, where many of the other airlines have shown great interest in using our software. That is one aspect from a fly crew perspective. Second, from a customer perspective, we have an app called Customer 360, which is a completely in-house developed model, which has all the customer direct transactions, surveys, or how they interact at the various checkpoints with our crew or at the boarding. You have all this data feed of a particular customer where our agents or the cabin crew can understand the customer’s sentiment and their reaction to service recovery action. 

Say for example, the customer calls up a call center and ask for a refund or miles compensation. Based on the historical usage, we could prioritize the best action to improve the customer satisfaction. We are connected to all these models and enable the frontline teams so that they can use this when they engage with the customer. An example at the airport, our agents will be able to see a lot of useful insights about the customers beyond the basic information like the flight itinerary or the online shopping history at the Cathay shop, et cetera, so that they can see the overall satisfaction level and get additional insights on recommended actions to restore or improve the customer satisfaction level. This is basically used by our frontline agents at the airport, our cabin crew as well as all the airport team, and the customer team so that they have great consistency in the service no matter what touchpoint the customers are choosing to contact us. 

Megan: Fantastic. 

Rajeev: So these are a few examples, looking from a back-end as well as a front-line team perspective. 

Megan: Yeah, absolutely. I’m sure there’s a few people listening who were wondering what pilots carry in that suitcase. So thank you so much for clearing that up. And finally, Rajeev, I guess looking ahead, what emerging technologies are you excited to explore further going forward to enhance digital capabilities and customer experience in the years to come? 

Rajeev: Yeah, so we will continue to explore AI and gen AI capability, which has been the spotlight for the last 18 months or so, be it for the passenger or even for the staff internally. We will continue to explore that. But apart from AI, one other aspect I believe could go at great ways around the AR and the VR capabilities, basically virtual reality. We have been exploring that for quite some time, but we believe that it will continue to be a mainstream technology that can change the way we serve the customer. Say for example, in Cathay, we already have a VR cave for our cabin crew training, virtual reality capabilities, and in a few months’ time, we are actually launching a learning facility based on VR where we could be able to provide more immersive learning experience for the cabin crew and later for the other employees. 

Basically, before a cabin crew is able to operate a flight, they go through a rigorous training in Cathay City in our headquarters, basically to know how to serve our passengers, how to handle an emergency situation, those sorts of aspects. And in many cases, we travel the crew from various outports or various countries back into Hong Kong to train them and equip them for these training activities. You can imagine that costs us a lot of money and effort to bring all the people back to Hong Kong. And by having VR capabilities, we are able to do that anywhere in the world without having that physical presence. That’s one area where it’ll go mainstream. 

The second is around other business units. Apart from the cabin crew, we are also experimenting the VR on the customer front. For example, we are able to launch a new business class seat product we call the Aria Suite by next year. And VR technology will help the customers to visualize the seat details without them able to get on board. So without them flying, even before that, they’re able to experience a product on the ground. At our physical shop in Hong Kong, customers can now use a virtual reality technology to visualize how our designer furniture and lifestyle products fit in the sitting rooms. The list of VR capabilities goes very long. The list goes on. And this is also a great and important way to engage with our customers in particular. 

Megan: Wow. Sounds like some exciting stuff on the way. Thank you ever so much, Rajeev, for talking us through that. That was Rajeev Nair, the general manager of IT infrastructure and security at Cathay Pacific, who I spoke with from an unexpectedly sunny Brighton, England.

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. 

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening. 

Data strategies for AI leaders

Organizations are starting the heavy lifting to get real business value from generative AI. As Arnab Chakraborty, chief responsible AI officer at Accenture, puts it, “2023 was the year when clients were amazed with generative AI and the possibilities. In 2024, we are starting to see scaled implementations of responsible generative AI programs.”

Some generative AI efforts remain modest. As Neil Ward-Dutton, vice president for automation, analytics, and AI at IDC Europe, describes it, this is “a classic kind of automation: making teams or individuals more productive, getting rid of drudgery, and allowing people to deliver better results more quickly.” Most companies, though, have much greater ambitions for generative AI: they are looking to reshape how they operate and what they sell.

Great expectations for generative AI

The expectation that generative AI could fundamentally upend business models and product offerings is driven by the technology’s power to unlock vast amounts of data that were previously inaccessible. “Eighty to 90% of the world’s data is unstructured,” says Baris Gultekin, head of AI at AI data cloud company Snowflake. “But what’s exciting is that AI is opening the door for organizations to gain insights from this data that they simply couldn’t before.”

In a poll conducted by MIT Technology Review Insights, global executives were asked about the value they hoped to derive from generative AI. Many say they are prioritizing the technology's ability to increase efficiency and productivity (72%), increase market competitiveness (55%), and drive better products and services (47%). Fewer see the technology primarily as a driver of increased revenue (30%) or reduced costs (24%), which suggests executives hold loftier ambitions. Respondents' top ambitions for generative AI also seem to work hand in hand: more than half of companies name new routes to market competitiveness among their top three goals, and the two most likely paths to achieving it are increased efficiency and better products or services.

For companies rolling out generative AI, these are not necessarily distinct choices. Chakraborty sees a “thin line between efficiency and innovation” in current activity. “We are starting to notice companies applying generative AI agents for employees, and the use case is internal,” he says, but the time saved on mundane tasks allows personnel to focus on customer service or more creative activities. Gultekin agrees. “We’re seeing innovation with customers building internal generative AI products that unlock a lot of value,” he says. “They’re being built for productivity gains and efficiencies.”

Chakraborty cites marketing campaigns as an example: “The whole supply chain of creative input is getting re-imagined using the power of generative AI. That is obviously going to create new levels of efficiency, but at the same time probably create innovation in the way you bring new product ideas into the market.” Similarly, Gultekin reports that a global technology conglomerate and Snowflake customer has used AI to make “700,000 pages of research available to their team so that they can ask questions and then increase the pace of their own innovation.”

The impact of generative AI on chatbots—in Gultekin's words, "the bread and butter of the recent AI cycle"—may be the best example. The rapid expansion of chatbot capabilities using AI straddles the line between improving an existing tool and creating a new one. It is unsurprising, then, that 44% of respondents see improved customer satisfaction as one way generative AI will bring value.

A closer look at our survey results reflects this overlap between productivity enhancement and product or service innovation. Nearly one-third of respondents (30%) included both increased productivity and innovation in the top three types of value they hope to achieve with generative AI. The first, in many cases, will serve as the main route to the other.

But efficiency gains are not the only path to product or service innovation. Some companies, Chakraborty says, are “making big bets” on wholesale innovation with generative AI. He cites pharmaceutical companies as an example. They, he says, are asking fundamental questions about the technology’s power: “How can I use generative AI to create new treatment pathways or to reimagine my clinical trials process? Can I accelerate the drug discovery time frame from 10 years to five years to one?”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Addressing climate change impacts

The reality of climate change has spurred enormous public and private investment worldwide, funding initiatives to mitigate its effects and to adapt to its impacts. That investment has spawned entire industries and countless new businesses, resulting in the creation of new green jobs and contributions to economic growth. In the United States, this includes the single largest climate-related investment in the country’s history, made in 2022 as part of the Inflation Reduction Act.

For most US businesses, however, the costs imposed by climate change and the future risks it poses will outweigh the growth opportunities afforded by the green sector. In a survey of 300 senior US executives conducted by MIT Technology Review, every respondent agrees that climate change is either harming the economy today or will harm it in the future. Most expect their organizations to contend with extreme weather, such as severe storms, flooding, and extreme heat, in the near term. Respondents also report that their businesses are already incurring costs related to climate change.

This research examines how US businesses view their climate change risk and the steps they are taking to adapt to climate change’s impacts. The results make clear that climate considerations, such as frequency of extreme weather and access to natural resources, are now a prime factor in businesses’ site location decisions. As climate change accelerates, such considerations are certain to grow in importance.

Key findings include the following:

Businesses are weighing relocation due to climate risks. Most executives surveyed (62%) deem some or all of their physical infrastructure exposed to the impacts of climate change, with 20% reporting it is “very exposed.” A full 75% of respondents report their organization has considered relocating due to climate risk, and 6% have concrete plans to relocate facilities within the next five years because of climate factors. And 24% report they have already relocated physical infrastructure to prepare for climate change impacts.

Companies must reckon with the costs of climate change adaptation. Nearly all US businesses have already suffered from the effects of climate change, judging by the survey. Weighing most heavily thus far, and likely in the future, are increases in operational costs (affecting 64%) and insurance premiums (63%), as well as disruption to operations (61%) and damage to infrastructure (55%).

Executives know climate change is here, and many are planning for it. Four-fifths (81%) of survey respondents deem climate planning and preparedness important to their business, and one-third describe it as very important. Some companies lag in translating this perceived importance into actual planning, however: only 62% have developed a climate change adaptation plan, and just 52% have conducted a climate risk assessment.

Climate-planning resources are a key criterion in site location. When judging a potential new business site on its climate mitigation features, 71% of executives highlight the availability of climate-planning resources as among their top criteria. Nearly two-thirds (64%) also cite the importance of a location’s access to critical natural resources.

Though climate change will affect everyone, its risks and impacts vary by region. No US region is immune: a majority of surveyed businesses in every region have experienced at least some negative climate change impacts. Respondents believe the risks are lowest in the Midwest, however, with nearly half (47%) naming that region as the least exposed to climate change risk.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
