
Arzeda is using AI to design proteins for natural sweeteners and more

AI is increasingly being applied to protein design, the process of creating new proteins with specific, target characteristics. Protein design’s applications are myriad, and it’s a promising way of discovering drug-based treatments to combat diseases and creating new homecare, agriculture, food-based, and materials products. One among the many vendors developing AI tech to design proteins, […]


Amazon's Secret Weapon in Chip Design Is Amazon



Big-name makers of processors, especially those geared toward cloud-based AI, such as AMD and Nvidia, have been showing signs of wanting to own more of the business of computing, purchasing makers of software, interconnects, and servers. The hope is that control of the “full stack” will give them an edge in designing what their customers want.

Amazon Web Services (AWS) got there ahead of most of the competition when it purchased chip designer Annapurna Labs in 2015 and proceeded to design CPUs, AI accelerators, servers, and data centers as a vertically integrated operation. Ali Saidi, the technical lead for the Graviton series of CPUs, and Rami Sinno, director of engineering at Annapurna Labs, explained the advantages of vertically integrated design at Amazon scale and showed IEEE Spectrum around the company’s hardware testing labs in Austin, Texas, on 27 August.

What brought you to Amazon Web Services, Rami?

Rami Sinno [Photo: AWS]

Rami Sinno: Amazon is my first vertically integrated company. And that was on purpose. I was working at Arm, and I was looking for the next adventure, looking at where the industry is heading and what I want my legacy to be. I looked at two things:

One is vertically integrated companies, because this is where most of the innovation is—the interesting stuff is happening when you control the full hardware and software stack and deliver directly to customers.

And the second thing is, I realized that machine learning, AI in general, is going to be very, very big. I didn’t know exactly which direction it was going to take, but I knew that there is something that is going to be generational, and I wanted to be part of that. I already had that experience prior when I was part of the group that was building the chips that go into the Blackberries; that was a fundamental shift in the industry. That feeling was incredible, to be part of something so big, so fundamental. And I thought, “Okay, I have another chance to be part of something fundamental.”

Does working at a vertically integrated company require a different kind of chip design engineer?

Sinno: Absolutely. When I hire people, the interview process goes after people who have that mindset. Let me give you a specific example: Say I need a signal integrity engineer. (Signal integrity makes sure a signal going from point A to point B, wherever it is in the system, makes it there correctly.) Typically, you hire signal integrity engineers who have a lot of experience in signal integrity analysis, who understand layout impacts, and who can do measurements in the lab. Well, this is not sufficient for our group, because we want our signal integrity engineers also to be coders. We want them to be able to take a workload or a test that will run at the system level and be able to modify it or build a new one from scratch in order to look at the signal integrity impact at the system level under workload. This is where being trained to be flexible, to think outside of the little box, has paid huge dividends in the way that we do development and the way we serve our customers.

“By the time that we get the silicon back, the software’s done” —Ali Saidi, Annapurna Labs

At the end of the day, our responsibility is to deliver complete servers in the data center directly to our customers. And if you think from that perspective, you’ll be able to optimize and innovate across the full stack. A design engineer or a test engineer should be able to look at the full picture, because that’s his or her job: deliver the complete server to the data center and look for where best to optimize. It might not be at the transistor level or at the substrate level or at the board level. It could be something completely different. It could be purely software. And having that knowledge, having that visibility, will allow the engineers to be significantly more productive and deliver to the customer significantly faster. We’re not going to bang our head against the wall to optimize the transistor where three lines of code downstream will solve these problems, right?

Do you feel like people are trained in that way these days?

Sinno: We’ve had very good luck with recent college grads. Recent college grads, especially the past couple of years, have been absolutely phenomenal. I’m very, very pleased with the way that the education system is graduating the engineers and the computer scientists that are interested in the type of jobs that we have for them.

The other place that we have been super successful in finding the right people is at startups. They know what it takes, because at a startup, by definition, you have to do so many different things. People who’ve done startups before completely understand the culture and the mindset that we have at Amazon.

What brought you to AWS, Ali?

Ali Saidi [Photo: AWS]

Ali Saidi: I’ve been here about seven and a half years. When I joined AWS, I joined a secret project at the time. I was told: “We’re going to build some Arm servers. Tell no one.”

We started with Graviton 1. Graviton 1 was really the vehicle for us to prove that we could offer the same experience in AWS with a different architecture.

The cloud gave us the ability for a customer to try it in a very low-cost, low-barrier-of-entry way and say, “Does it work for my workload?” So Graviton 1 was really just the vehicle to demonstrate that we could do this, and to start signaling to the world that we want the software ecosystem around Arm servers to grow and that they’re going to be more relevant.

Graviton 2—announced in 2019—was kind of our first… what we think is a market-leading device that’s targeting general-purpose workloads, web servers, and those types of things.

It’s done very well. We have people running databases, web servers, key-value stores, lots of applications... When customers adopt Graviton, they bring one workload, and they see the benefits of bringing that one workload. And then the next question they ask is, “Well, I want to bring some more workloads. What should I bring?” There were some workloads where it effectively wasn’t powerful enough, particularly around things like media encoding: taking videos and encoding them, re-encoding them, or encoding them to multiple streams. It’s a very math-heavy operation and required more [single-instruction multiple data] bandwidth. We needed cores that could do more math.

We also wanted to enable the [high-performance computing] market. So we have an instance type called HPC 7G where we’ve got customers like Formula One. They do computational fluid dynamics of how this car is going to disturb the air and how that affects following cars. It’s really just expanding the portfolio of applications. We did the same thing when we went to Graviton 4, which has 96 cores versus Graviton 3’s 64.

How do you know what to improve from one generation to the next?

Saidi: By and large, most customers find great success when they adopt Graviton. Occasionally, they see performance that isn’t at the same level as their other migrations. They might say “I moved these three apps, and I got 20 percent higher performance; that’s great. But I moved this app over here, and I didn’t get any performance improvement. Why?” It’s really great to see the 20 percent. But for me, in the kind of weird way I am, the 0 percent is actually more interesting, because it gives us something to go and explore with them.

Most of our customers are very open to those kinds of engagements. So we can understand what their application is and build some kind of proxy for it. Or if it’s an internal workload, then we could just use the original software. And then we can use that to kind of close the loop and work on what the next generation of Graviton will have and how we’re going to enable better performance there.

What’s different about designing chips at AWS?

Saidi: In chip design, there are many different competing optimization points. You have all of these conflicting requirements, you have cost, you have scheduling, you’ve got power consumption, you’ve got size, what DRAM technologies are available and when you’re going to intersect them… It ends up being this fun, multifaceted optimization problem to figure out what’s the best thing that you can build in a timeframe. And you need to get it right.

One thing that we’ve done very well is taken our initial silicon to production.

How?

Saidi: This might sound weird, but I’ve seen other places where the software and the hardware people effectively don’t talk. The hardware and software people in Annapurna and AWS work together from day one. The software people are writing what will ultimately be the production software and firmware while the hardware is being developed, in cooperation with the hardware engineers. By working together, we’re closing that iteration loop. When you are carrying the piece of hardware over to the software engineer’s desk, your iteration loop is years and years. Here, we are iterating constantly. We’re running virtual machines in our emulators before we have the silicon ready. We are taking an emulation of [a complete system] and running most of the software we’re going to run.

So by the time that we get the silicon back [from the foundry], the software’s done. And we’ve seen most of the software work at this point. So we have very high confidence that it’s going to work.

The other piece of it, I think, is just being absolutely laser-focused on what we are going to deliver. You get a lot of ideas, but your design resources are approximately fixed. No matter how many ideas I put in the bucket, I’m not going to be able to hire that many more people, and my budget’s probably fixed. So every idea I throw in the bucket is going to use some resources. And if that feature isn’t really important to the success of the project, I’m risking the rest of the project. And I think that’s a mistake that people frequently make.

Are those decisions easier in a vertically integrated situation?

Saidi: Certainly. We know we’re going to build a motherboard and a server and put it in a rack, and we know what that looks like… So we know the features we need. We’re not trying to build a superset product that could allow us to go into multiple markets. We’re laser-focused into one.

What else is unique about the AWS chip design environment?

Saidi: One thing that’s very interesting for AWS is that we’re the cloud and we’re also developing these chips in the cloud. We were the first company to really push on running [electronic design automation (EDA)] in the cloud. We changed the model from “I’ve got 80 servers and this is what I use for EDA” to “Today, I have 80 servers. If I want, tomorrow I can have 300. The next day, I can have 1,000.”

We can compress some of the time by varying the resources that we use. At the beginning of the project, we don’t need as many resources. We can turn a lot of stuff off and effectively not pay for it. As we get to the end of the project, we need many more resources. And instead of saying, “Well, I can’t iterate this fast, because I’ve got this one machine, and it’s busy,” I can change that and say, “Well, I don’t want one machine; I’ll have 10 machines today.”

Instead of my iteration cycle being two days for a big design like this, or even one day, with these 10 machines I can bring it down to three or four hours. That’s huge.
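To make the arithmetic concrete, here is a toy model of that elasticity. It is a sketch only: it assumes an Amdahl-style split between serial and parallel work, and the job size and parallel fraction are invented numbers, not AWS figures.

    # Toy model of how elastic capacity shortens an EDA iteration loop.
    # Assumes the job (e.g., a regression suite) parallelizes near-linearly,
    # which is optimistic but illustrates the scaling Saidi describes.

    def iteration_hours(total_core_hours: float, machines: int,
                        parallel_fraction: float = 0.95) -> float:
        """Amdahl-style estimate: the serial part runs on one machine;
        the parallel part is split evenly across all machines."""
        serial = total_core_hours * (1 - parallel_fraction)
        parallel = total_core_hours * parallel_fraction / machines
        return serial + parallel

    # A 48-hour job on 1 machine vs. the same work spread over 10:
    print(iteration_hours(48, 1))   # 48.0 hours
    print(iteration_hours(48, 10))  # ~7.0 hours

The closer the workload is to perfectly parallel, the closer ten machines get to the three-or-four-hour loop described above.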

How important is Amazon.com as a customer?

Saidi: They have a wealth of workloads, and we obviously are the same company, so we have access to some of those workloads in ways that with third parties, we don’t. But we also have very close relationships with other external customers.

So last Prime Day, we said that 2,600 Amazon.com services were running on Graviton processors. This Prime Day, that number more than doubled to 5,800 services running on Graviton. And the retail side of Amazon used over 250,000 Graviton CPUs in support of the retail website and the services around that for Prime Day.

The AI accelerator team is colocated with the labs that test everything from chips through racks of servers. Why?

Sinno: Annapurna Labs has multiple labs in multiple locations. This location here in Austin is one of the smaller labs. But what’s so interesting about the lab here in Austin is that you have all of the hardware and many of the software development engineers for machine learning servers and for Trainium and Inferentia [AWS’s AI chips] effectively co-located on this floor. For hardware developers and engineers, having the labs co-located on the same floor has been very, very effective. It speeds execution and iteration for delivery to the customers. This lab is set up to be self-sufficient for anything that we need to do, at the chip level, at the server level, at the board level. Because again, as I convey to our teams, our job is not the chip; our job is not the board; our job is the full server to the customer.

How does vertical integration help you design and test chips for data-center-scale deployment?

Sinno: It’s relatively easy to create a bar-raising server: something that’s very high-performance and very low-power. If we create 10 of them, 100 of them, maybe 1,000 of them, it’s easy. You can cherry-pick this, you can fix this, you can fix that. But the scale that AWS operates at is significantly higher. We need to train models that require 100,000 of these chips. 100,000! And training doesn’t run in five minutes. It runs in hours or days or even weeks. Those 100,000 chips have to be up for the duration. Everything that we do here is to get to that point.

We start from a “what are all the things that can go wrong?” mindset. And we implement all the things that we know. But when you’re talking about cloud scale, there are always things that you have not thought of that come up. These are the 0.001-percent type issues.
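A back-of-the-envelope calculation shows why those rare issues dominate at this scale. The failure probability below is an invented illustration, not an AWS figure:

    # Why "0.001-percent type issues" matter at 100,000 chips: a failure
    # mode that almost never hits one device becomes routine fleet-wide.

    per_chip_daily_prob = 1e-5   # a "0.001 percent" chance per chip per day
    chips = 100_000
    days = 14                    # a multi-week training run

    # Probability that at least one chip hits the issue during the run:
    p_clean = (1 - per_chip_daily_prob) ** (chips * days)
    print(f"P(at least one occurrence) = {1 - p_clean:.6f}")   # ~0.999999

    # Expected number of occurrences over the run:
    print(f"Expected occurrences: {per_chip_daily_prob * chips * days:.0f}")  # 14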

In this case, we do the debugging first in the fleet. And in certain cases, we have to debug in the lab to find the root cause. If we can fix it immediately, we fix it immediately. Being vertically integrated, in many cases we can do a software fix for it. We use our agility to rush a fix out while at the same time making sure that the next generation has it already figured out from the get-go.

ClassLink Learning Design Team

The Learning Design Team at ClassLink is the power behind all things educational for internal and external stakeholders. They are the source of all Help Center documentation and ClassLink Academy courses.

ClassLink’s Help Center provides over 450 easy-to-use articles that allow users to find the information they need quickly. Articles are ADA compliant and feature light humor, emojis, graphics, videos, and GIFs to further illustrate concepts.

ClassLink Academy is a comprehensive online training platform designed to provide technical administrators, educational leaders, instructors, and students with top-notch resources. With over 200 micro-courses, its primary goal is to elevate users’ proficiency and comprehension in using ClassLink’s suite of products effectively.

ClassLink Academy features proven andragogy and pedagogy as well as gamification, multimedia content, certifications, CEUs, and badges.

Both ClassLink Academy and ClassLink’s Help Center have helped users develop a higher level of proficiency with ClassLink products. Comprehensive training resources have given users a deeper understanding of the platform’s features and functionality, enabling them to navigate and use them more effectively. ClassLink’s Help Desk has also seen a decrease in help center tickets as users become able to navigate the products effectively on their own.

Thanks to higher proficiency and confidence powered by knowledge, users have enjoyed improved productivity as they’ve optimized their workflows to streamline their tasks. The increased efficiency has given them faster ways to access resources, applications, and data, saving time and effort.

For these reasons and more, the ClassLink Learning Design Team has been recognized as an EdTech Trendsetter Awards winner for “EdTech Group Setting a Trend” as part of The EdTech Awards 2024 from EdTech Digest.

VR Comfort Settings Checklist & Glossary for Developers and Players Alike

For those who have been playing or developing VR content for years, it might seem ‘obvious’ what kind of settings are expected to be included for player comfort. Yet for new players and developers alike, the confusing sea of VR comfort terms is far from straightforward. This has led to situations where players buy a game only to find it doesn’t include a comfort setting that’s important to them. So here’s a checklist and glossary of ‘essential’ VR comfort settings that developers should clearly communicate to potential customers about their VR game or experience.

Update July 24th, 2024: Road to VR now offers developers private comfort design audits for XR apps. Your app will get an overall ‘Comfort Grade’ with a straightforward list of comfort issues and suggested fixes. Reach us at consult [at] roadtovr.com for details.

VR Comfort Settings Checklist

Let’s start with the VR comfort settings checklist, using two example games. While it is by no means comprehensive, it covers many of the basic comfort settings employed by VR games today. To be clear, this checklist is not a list of settings a game should include; it is merely the information that should be communicated so customers know which comfort settings are offered.

Want expert insight on your app’s comfort design? Reach us at consult [at] roadtovr.com to discuss a personalized comfort design audit for your XR app.

ℹ We chose these two examples because a game like Beat Saber, despite being an almost universally comfortable VR game, will have many ‘n/a’ entries on its list because it completely lacks artificial turning & movement, whereas a game like Half-Life: Alyx uses artificial turning & movement and therefore offers more options for player comfort.

Setting                        Half-Life: Alyx                Beat Saber

Turning
  Artificial turning           ✔                              ✖
  Snap-turn                    ✔                              n/a
    Adjustable increments      ✔                              n/a
  Quick-turn                   ✖                              n/a
    Adjustable increments      n/a                            n/a
    Adjustable speed           n/a                            n/a
  Smooth-turn                  ✔                              n/a
    Adjustable speed           ✔                              n/a

Movement
  Artificial movement          ✔                              ✖
  Teleport-move                ✔                              n/a
  Dash-move                    ✔                              n/a
  Smooth-move                  ✔                              n/a
    Adjustable speed           ✔                              n/a
  Blinders                     ✖                              n/a
    Adjustable strength        n/a                            n/a
  Head-based                   ✔                              n/a
  Controller-based             ✔                              n/a
  Swappable movement hand      ✔                              n/a

Posture
  Standing mode                ✔                              ✔
  Seated mode                  ✔                              not explicit
  Artificial crouch            ✔                              ✖
  Real crouch                  ✔                              ✔

Accessibility
  Subtitles                    ✔                              n/a
    Languages                  English, French, German […]    n/a
  Dialogue audio               ✔                              n/a
    Languages                  English                        n/a
  Adjustable difficulty        ✔                              ✔
  Two hands required           ✖                              For some game modes (optional)
  Real crouch required         ✖                              For some levels (optional)
  Hearing required             ✖                              ✖
  Adjustable player height     ✖                              ✔

If players are equipped with this information ahead of time, it will help them make a more informed buying decision.

VR Comfort Settings Glossary

For new players, many of these terms might be confusing. Here’s a glossary of basic definitions of each VR comfort setting.

Turning

  • Artificial turning – whether or not the game allows the player to rotate their view separately from their real-world orientation within their playspace (also called virtual turning)
    • Snap-turn – comfortable for most
      Instantly rotates the camera view in steps or increments (also called blink-turn)
    • Quick-turn – comfortable for some
      Quickly rotates the camera view in steps or increments (also called fast-turn or dash-turn)
    • Smooth-turn – comfortable for the fewest
      Smoothly rotates the camera view (also called continuous-turn); a minimal code sketch contrasting snap-turn and smooth-turn follows this list
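As a concrete reference for the turning terms above, here is a minimal sketch, with invented increment and speed values, of how snap-turn and smooth-turn typically differ in implementation:

    # Hypothetical per-frame turning logic; the thresholds and values are
    # tuning parameters, not standards from the article.

    SNAP_INCREMENT_DEG = 30.0   # "adjustable increments"
    SMOOTH_SPEED_DEG_S = 90.0   # "adjustable speed"

    def snap_turn(yaw_deg, stick_x, was_neutral):
        """Rotate instantly by one increment when the stick leaves neutral."""
        if was_neutral and abs(stick_x) > 0.7:
            yaw_deg += SNAP_INCREMENT_DEG * (1 if stick_x > 0 else -1)
            was_neutral = False
        elif abs(stick_x) < 0.3:
            was_neutral = True   # re-arm once the stick returns to center
        return yaw_deg % 360, was_neutral

    def smooth_turn(yaw_deg, stick_x, dt):
        """Rotate continuously every frame; the constant peripheral motion
        is what makes this the least comfortable option for many players."""
        return (yaw_deg + SMOOTH_SPEED_DEG_S * stick_x * dt) % 360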

Movement

  • Artificial movement – whether or not the game allows the player to move through the virtual world separately from their real-world movement within their playspace (also called virtual movement)
    • Teleport-move – comfortable for most
      Instantly moves the player between positions (also called blink-move)
    • Dash-move – comfortable for some
      Quickly moves the player between positions (also called shift-move)
    • Smooth-move – comfortable for the fewest
      Smoothly moves the player through the world (also called continuous-move)
  • Head-based – the game considers the player’s head direction as the ‘forward’ direction for artificial movement
  • Hand-based – the game considers the player’s hand/controller direction as the ‘forward’ direction for artificial movement
  • Swappable movement hand – allows the player to change the artificial movement controller input between the left and right hands
  • Blinders – cropping of the headset’s field of view to reduce motion visible in the player’s periphery (also called vignette); see the sketch after this list
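Since blinders are more of a technique than a simple toggle, here is a minimal sketch of one common approach, with invented speed thresholds: the vignette tightens as artificial motion gets faster, reducing optical flow in the player’s periphery.

    # Hypothetical comfort-vignette strength: 0.0 means full field of view,
    # higher values crop more of the periphery. Thresholds are tuning
    # parameters, not standard values.

    def vignette_strength(linear_speed_mps, angular_speed_dps,
                          max_strength=0.6):
        linear_term = min(linear_speed_mps / 3.0, 1.0)     # ~3 m/s = full effect
        angular_term = min(angular_speed_dps / 90.0, 1.0)  # ~90 deg/s = full effect
        return max_strength * max(linear_term, angular_term)

    print(vignette_strength(0.0, 0.0))    # 0.0 (standing still: no blinders)
    print(vignette_strength(4.0, 120.0))  # 0.6 (sprinting + smooth turning)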

Posture

  • Standing mode – supports players playing in a real-world standing position
  • Seated mode – supports players playing in a real-world seated position
  • Artificial crouch – allows the player to crouch with a button input instead of crouching in the real world (also called virtual crouch)
  • Real crouch – allows the player to crouch in the real-world and have it correctly reflected as crouching in the game

Accessibility

  • Subtitles – whether the game has subtitles for dialogue & interface, and in which languages
  • Audio – whether the game has audio dialogue, and in which languages
  • Adjustable difficulty – allows the player to control the difficulty of a game’s mechanics
  • Two-hands required – whether two hands are required for core game completion or essential mechanics
  • Real-crouch required – a game which requires the player to physically crouch for core completion or essential mechanics (with no comparable artificial crouch option)
  • Hearing required – a game which requires the player to be able to hear for core completion or essential mechanics
  • Adjustable player height – whether the player can change their in-game height separately from their real world height (distinct from artificial crouching because the adjustment is persistent and may also work in tandem with artificial crouching)

– – — – –

As mentioned, this is not a comprehensive list. VR comfort is a complex topic especially because everyone’s experience is somewhat different, but this is hopefully a useful baseline to help streamline communication between developers and players alike.

For developers exploring various locomotion methods for use in VR content, the Locomotion Vault is a good resource to see real-world examples.

For players with disabilities who want more options for VR game accessibility check out the WalkinVR custom locomotion driver.

Major ‘ShapesXR’ Update Streamlines Collaborative XR Prototyping, Releases Web Editor for PC Users

Spatial design and prototyping app ShapesXR (2021) just launched its 2.0 update, which streamlines cross-platform support, letting team members more easily edit and collaborate in both mixed and virtual reality, and now also on the web.

ShapesXR 2.0 is packing in a number of new features today to enhance the cross-platform app, which not only supports Quest 1/2/3/Pro and Pico 4, but now also standard flatscreen devices thanks to a web editor for users joining with mouse and keyboard.

Check out all of the things coming to ShapesXR 2.0 below:

Enhanced UI/UX: Shapes has been fully refreshed with an entirely new interface that takes unique advantage of depth and materials. The information architecture has been simplified to enhance ease of use and learnability.

Interactive Prototyping: New triggers and actions have been introduced to help designers explore more robust interactions, allowing them to use button presses, physical touch, and haptics to design dynamic and engaging spatial experiences.

Spatial Sound Prototyping: Users can now import sounds and add spatial audio to interaction triggers, creating more immersive experiences and prototypes that win arguments and green lights.

Procedural Primitives and New Assets Library: A new library of fully procedural primitives provides a diverse range of 3D models and templates for users to build with.

Custom Inspector: The custom inspector allows for precise adjustments, optimizing the design process.

Performance Optimization: Significant optimizations ensure smoother experiences and faster load times, enhancing overall efficiency.

Flexible Input Support: The new architecture and UI support any input type, including controllers, hands, and mouse and keyboard, making the design process smoother and more intuitive.

ShapesXR founder and CEO Inga Petryaevskaya calls the addition of the new web editor “a strategic move to extend the time users spend in the product and to enable co-design and editing with those who do not have an XR device.”

To boot, a number of VR studios have used ShapesXR over the years to collaboratively build their apps, including mixed reality piano tutor PianoVision, physics-based VR rollercoaster CoasterMania, and Nanome, an XR platform for molecular design in the drug discovery and materials science industries. You can check out the company’s full slate of case studies here.

The app is a free download on supported platforms, with both free and subscription-based plans. ShapesXR’s free plan comes with its core creation tools, three editable spaces, 150 MB of cloud storage, a 20 MB import cap on files, the ability to import PNG, JPG, OBJ, GLB, and glTF files, and export to glTF, USDZ, and Unity.

Both its Team and Enterprise plans include unlimited editable spaces, respective bumps in cloud storage, and a host of other features that ought to appeal to larger teams looking to integrate ShapesXR into their workflow. You can check out all of the subscription plans here.


Educational design and productive failure: stories of creative risk taking

This chapter focuses on the creative risk taking involved in educational design and is an exciting collaboration between DER member Prof Michael Henderson and 12 senior Educational Designers embedded centrally and within 9 Faculties.

Educational designers regularly engage in a process of creative risk taking. Inevitably, some designs result in degrees of failure, which need to be productively managed. Surprisingly, creative risk taking and productive failures are rarely discussed or studied in the field of educational design or educational technology.

Through the analysis of educational designer narratives, we identified a broad aversion to openly acknowledging risks and failures. This was partly due to a drive for narratives of success by institutions and education in general, combined with the often precarious positions of the designers themselves, who work in a “third space” beside and between educators and students and who therefore have to establish and sustain the trust of those they work with.

In this chapter and our subsequent work we have identified seven strategies for educational designers and institutional leaders to promote changes in practice:

  1. Normalize failure: acknowledge failures in every creative success; actively create time to reflect on practice as a habit; leaders at all levels to role model productive framing of failure.
  2. Recognize the emotional labour of failure and vulnerability in engaging with it: acknowledge that it is hard to talk about failure; leaders need to show vulnerability and role model this too; embed emotional intelligence in reflective practice; recognize that educators are often in vulnerable positions as well, feeling at risk in revealing themselves to educational designers.
  3. Involve others and resist internalising failure: include educators, students and other diverse perspectives in the design and reflection cycles; adopt or build a supportive community that engenders the sharing of vulnerability and candour.
  4. Position failure as part of a process: adopt a designerly mindset – finding solutions is an ongoing cycle of design and redesign; define the role of educational design as a creative endeavour, in which failure is explicitly framed as a possibility.
  5. Purposefully build trusting and candid relationships over time: encourage candour through adopting a welcoming and accepting approach to problems, needs and concerns.
  6. Question the validity of success criteria: leaders at all levels need to be critical of measures of successful educational design such as grade outcomes and student satisfaction which are usually confounded with competing factors.
  7. Revise the language surrounding the work of educational design: leaders need to frame the position descriptions, strategic directions and outcome expectations to include concepts of iteration, experimentation, trialling, prototyping and productive failure.

This study reveals that failure is an inherent risk in creative educational design work, but it can also be productive.

Below is a poster presentation of our research, offering the seven strategies thematically organised into three themes of strategic action: shaping expectations, redefining processes, and supporting people.

Citation:

Henderson, M., Abramson, P., Bangerter, M., Chen, M., D’Souza, I., Fulcher, J., Halupka, V., Hook, J., Horton, C., Macfarlan, B., Mackay, R., Nagy, K., Schliephake, K., Trebilco, J. & Vu, T. (2022). Educational design and productive failure: the need for a culture of creative risk taking. In Handbook of Digital Higher Education (pp. 14-25). Edward Elgar Publishing. https://doi.org/10.4337/9781800888494.00011

How AI Will Change Chip Design



The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU v4 AI chip has doubled its processing power compared with that of its previous version.

But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There are a lot of important applications here, even in general process engineering, where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Heather Gorr [Photo: MathWorks]

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.
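As a rough illustration of that kind of data mining, here is a sketch of one possible approach, not the MathWorks workflow; the per-run features and the isolation-forest choice are assumptions for illustration:

    # Flag anomalous manufacturing runs in historical process data.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Invented per-run features: [cycle_time_hours, rework_count, tool_temp_C]
    normal_runs = rng.normal([12.0, 1.0, 45.0], [1.0, 0.8, 2.0], size=(500, 3))
    slow_runs = rng.normal([18.0, 4.0, 52.0], [1.0, 1.0, 2.0], size=(10, 3))
    runs = np.vstack([normal_runs, slow_runs])

    model = IsolationForest(contamination=0.02, random_state=0).fit(runs)
    flags = model.predict(runs)   # -1 marks a run worth investigating
    print(f"Flagged {np.sum(flags == -1)} of {len(runs)} runs for review")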

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced-order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that result from iterating quickly on the experiments and the simulations that will really help in the design.
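Here is a minimal sketch of that workflow: fit a cheap surrogate to a handful of expensive “physics” runs, then do the Monte Carlo sweep on the surrogate. The toy response function and the Gaussian-process choice are illustrative assumptions (the interview’s context is MATLAB; Python is used here for brevity):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_physics_model(x):
        """Stand-in for a slow solver: response vs. one design parameter."""
        return np.sin(3 * x) + 0.5 * x**2

    # 1) A small design-of-experiments set of expensive evaluations.
    x_train = np.linspace(0.0, 2.0, 12).reshape(-1, 1)
    y_train = expensive_physics_model(x_train).ravel()

    # 2) Fit the surrogate, a cheap stand-in for the full model.
    surrogate = GaussianProcessRegressor().fit(x_train, y_train)

    # 3) Monte Carlo on the surrogate: thousands of near-free evaluations.
    samples = np.random.default_rng(1).uniform(0.0, 2.0, size=(10_000, 1))
    predictions = surrogate.predict(samples)
    print(f"Estimated mean response: {predictions.mean():.3f}")
    print(f"Estimated 99th percentile: {np.percentile(predictions, 99):.3f}")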

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you can tweak and tune, trying different parameters and experiments, which lets you sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
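For instance, a first pass over two mismatched sensor streams might look like the following sketch. The sample rates and signal frequencies are invented, and Python with SciPy stands in here for whichever tools a team actually uses:

    import numpy as np
    from scipy import signal

    fs_a, fs_b, fs_common = 1000, 800, 400            # sample rates in Hz
    t_a = np.arange(0, 1.0, 1 / fs_a)
    sensor_a = np.sin(2 * np.pi * 50 * t_a)           # 50 Hz component
    t_b = np.arange(0, 1.0, 1 / fs_b)
    sensor_b = np.sin(2 * np.pi * 120 * t_b)          # 120 Hz component

    # Resample both streams to a shared rate so they can be aligned.
    a_common = signal.resample(sensor_a, fs_common)
    b_common = signal.resample(sensor_b, fs_common)

    # Welch's method estimates the power spectrum; peaks reveal which
    # frequencies dominate each (now synchronized) stream.
    freqs_a, power_a = signal.welch(a_common, fs=fs_common, nperseg=256)
    freqs_b, power_b = signal.welch(b_common, fs=fs_common, nperseg=256)
    print(f"Dominant frequency in sensor A: {freqs_a[np.argmax(power_a)]:.0f} Hz")
    print(f"Dominant frequency in sensor B: {freqs_b[np.argmax(power_b)]:.0f} Hz")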

One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

What should engineers and designers consider when using AI for chip design?

Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
