Neurodevelopmental Disruptions Behind Schizophrenia Cognitive Deficits

A recent review of genetic and population studies reveals that premorbid cognitive deficits in schizophrenia, such as lower IQ, are largely due to neurodevelopmental disruptions rather than inherited genetic variants that directly increase schizophrenia risk. The findings suggest that non-familial factors, including rare genetic mutations and environmental influences, play a significant role in both cognitive impairments and schizophrenia risk.

How Logical Thinking and Empathy Define Wisdom

A new study reveals that people across 12 countries and five continents perceive wisdom through two key dimensions: reflective orientation and socio-emotional awareness. Reflective orientation includes logical thinking and emotion control, while socio-emotional awareness focuses on empathy and social context. These dimensions consistently influence how individuals judge wisdom in leaders, scientists, and others. The findings highlight the universal principles that shape perceptions of wisdom and their implications for leadership and education.

Blocking a Brain Pathway Reverses Memory Loss in Alzheimer’s

Blocking the kynurenine pathway, a regulator of brain metabolism, can restore cognitive function in lab mice with Alzheimer’s disease. The pathway is overactivated in Alzheimer’s, disrupting glucose metabolism and starving neurons of energy. By inhibiting this pathway, researchers improved memory and brain plasticity in mice, offering hope for new treatments in humans. IDO1 inhibitors, currently in cancer trials, could be repurposed for Alzheimer’s treatment.

IEEE Introduces Digital Certificates Documenting Volunteer Roles

IEEE Collabratec has made it easier for volunteers to display their IEEE positions. The online networking platform released a new benefit this year for its users: digital certificates for IEEE volunteering. They reflect contributions made to the organization, such as leading a committee or organizing an event.

Members can download the certificates and add them to their LinkedIn profile or résumé. Volunteers also can print their certificates to frame and display in their office.

Each individualized document includes the person’s name, the position they’ve held, and the years served. Every position held has its own certificate. The member’s list of roles is updated annually.

The feature grew out of a top recommendation for improving volunteer recognition made by delegates at the 2023 IEEE Sections Congress, according to Deepak Mathur, an IEEE senior member and vice president of IEEE Member and Geographic Activities. The new feature “respects the time and effort of our volunteers and is a testament to the power and versatility of the Collabratec platform,” Mathur said in an announcement.

Members can download their certificates by selecting the Certificates tab on their Collabratec page and scrolling to each of their positions.

To learn more about IEEE Collabratec, check out the user guide, FAQs, and users’ forum.

GLP-1 Drug Slows Alzheimer’s Cognitive Decline

A recent study suggests that a GLP-1 drug, liraglutide, may protect the brains of people with mild Alzheimer’s disease, slowing cognitive decline by 18% over one year. This effect is likely due to the drug’s ability to reduce brain inflammation, improve insulin resistance, and lower the impact of Alzheimer’s biomarkers.

How “This” and “That” Shape Language and Social Cognition

A study reveals that demonstratives like 'this' and 'that' not only indicate distance but also direct attention, linking language to social cognition. Researchers found that the meaning of demonstratives varies across languages and is influenced by the listener's focus. The study involved speakers of ten languages and used computational modeling to understand these dynamics. The findings suggest that attention manipulation is an inherent part of language, embedded in demonstratives.

Researchers Use Game of Thrones to Study Face Blindness

Researchers used Game of Thrones to study how the brain recognizes faces, providing insights into prosopagnosia, a condition that impairs facial recognition and affects about 1 in 50 people. MRI scans showed increased brain activity in regions associated with character knowledge in fans of the series, but reduced activity in those unfamiliar with the show and in prosopagnosia patients.

Daily Naps and Brain Training Reduce Dementia Risk

Exercising our brains with daily habits like naps and memory workouts, instead of relying on smartphones, can reduce the risk of age-related dementia. A new study highlights the superiority of human intelligence over AI, emphasizes nurturing the brain’s potential for healthy aging, and offers practical tips for boosting brain power and maintaining real intelligence.

Making Us Blush: Study Explores Blushing Mechanisms

Researchers investigated the neural basis of blushing using MRI scans and cheek temperature measurements. The study found that blushing activates the cerebellum and early visual areas, but not regions linked to understanding mental states. This suggests blushing may be an automatic emotional response rather than a cognitive one. The findings could help address social anxiety related to blushing.

Infants Use Mom’s Scent to Recognize Faces

A new study reveals that infants use their mother's scent to enhance facial recognition. This ability to integrate sensory cues improves significantly between four and twelve months of age. Researchers found that younger infants rely heavily on their mother's scent, while older infants depend more on visual information alone. The findings highlight the importance of multisensory exposure for early cognitive development.

Deepfake Porn Is Leading to a New Protection Industry

It’s horrifyingly easy to make deepfake pornography of anyone thanks to today’s generative AI tools. A 2023 report by Home Security Heroes (a company that reviews identity-theft protection services) found that it took just one clear image of a face and less than 25 minutes to create a 60-second deepfake pornographic video—for free.

The world took notice of this new reality in January when graphic deepfake images of Taylor Swift circulated on social media platforms, with one image receiving 47 million views before it was removed. Others in the entertainment industry, most notably Korean pop stars, have also seen their images taken and misused—but so have people far from the public spotlight. There’s one thing that virtually all the victims have in common, though: According to the 2023 report, 99 percent of victims are women or girls.

This dire situation is spurring action, largely from women who are fed up. As one startup founder, Nadia Lee, puts it: “If safety tech doesn’t accelerate at the same pace as AI development, then we are screwed.” While there’s been considerable research on deepfake detectors, they struggle to keep up with deepfake generation tools. What’s more, detectors help only if a platform is interested in screening out deepfakes, and most deepfake porn is hosted on sites dedicated to that genre.

“Our generation is facing its own Oppenheimer moment,” says Lee, CEO of the Australia-based startup That’sMyFace. “We built this thing”—that is, generative AI—“and we could go this way or that way with it.” Lee’s company is first offering visual-recognition tools to corporate clients who want to be sure their logos, uniforms, or products aren’t appearing in pornography (think, for example, of airline stewardesses). But her long-term goal is to create a tool that any woman can use to scan the entire Internet for deepfake images or videos bearing her own face.

“If safety tech doesn’t accelerate at the same pace as AI development, then we are screwed.” —Nadia Lee, That’sMyFace

Another startup founder had a personal reason for getting involved. Breeze Liu was herself a victim of deepfake pornography in 2020; she eventually found more than 800 links leading to the fake video. She felt humiliated, she says, and was horrified to find that she had little recourse: The police said they couldn’t do anything, and she herself had to identify all the sites where the video appeared and petition to get it taken down—appeals that were not always successful. There had to be a better way, she thought. “We need to use AI to combat AI,” she says.

Liu, who was already working in tech, founded Alecto AI, a startup named after a Greek goddess of vengeance. The app she’s building lets users deploy facial recognition to check for wrongful use of their own image across the major social media platforms (she’s not considering partnerships with porn platforms). Liu aims to partner with the social media platforms so her app can also enable immediate removal of offending content. “If you can’t remove the content, you’re just showing people really distressing images and creating more stress,” she says.

Liu says she’s currently negotiating with Meta about a pilot program, which she says will benefit the platform by providing automated content moderation. Thinking bigger, though, she says the tool could become part of the “infrastructure for online identity,” letting people check also for things like fake social media profiles or dating site profiles set up with their image.

Can Regulations Combat Deepfake Porn?

Removing deepfake material from social media platforms is hard enough—removing it from porn platforms is even harder. To have a better chance of forcing action, advocates for protection against image-based sexual abuse think regulations are required, though they differ on what kind of regulations would be most effective.

Susanna Gibson started the nonprofit MyOwn after her own deepfake horror story. She was running for a seat in the Virginia House of Delegates in 2023 when the official Republican party of Virginia mailed out sexual imagery of her that had been created and shared without her consent, including, she says, screenshots of deepfake porn. After she narrowly lost the election, she devoted herself to leading the legislative charge in Virginia and then nationwide to fight back against image-based sexual abuse.

“The problem is that each state is different, so it’s a patchwork of laws. And some are significantly better than others.” —Susanna Gibson, MyOwn

Her first win was a bill that the Virginia governor signed in April to expand the state’s existing “revenge porn” law to cover more types of imagery. “It’s nowhere near what I think it should be, but it’s a step in the right direction of protecting people,” Gibson says.

While several federal bills have been introduced to explicitly criminalize the nonconsensual distribution of intimate imagery or deepfake porn in particular, Gibson says she doesn’t have great hopes of those bills becoming the law of the land. There’s more action at the state level, she says.

“Right now there are 49 states, plus D.C., that have legislation against nonconsensual distribution of intimate imagery,” Gibson says. “But the problem is that each state is different, so it’s a patchwork of laws. And some are significantly better than others.” Gibson notes that almost all of the laws require proof that the perpetrator acted with intent to harass or intimidate the victim, which can be very hard to prove.

Among the different laws, and the proposals for new laws, there’s considerable disagreement about whether the distribution of deepfake porn should be considered a criminal or civil matter. And if it’s civil, which means that victims have the right to sue for damages, there’s disagreement about whether the victims should be able to sue the individuals who distributed the deepfake porn or the platforms that hosted it.

Beyond the United States is an even larger patchwork of policies. In the United Kingdom, the Online Safety Act passed in 2023 criminalized the distribution of deepfake porn, and an amendment proposed this year may criminalize its creation as well. The European Union recently adopted a directive that combats violence and cyberviolence against women, which includes the distribution of deepfake porn, but member states have until 2027 to implement the new rules. In Australia, a 2021 law made it a civil offense to post intimate images without consent, but a newly proposed law aims to make it a criminal offense, and also aims to explicitly address deepfake images. South Korea has a law that directly addresses deepfake material, and unlike many others, it doesn’t require proof of malicious intent. China has a comprehensive law restricting the distribution of “synthetic content,” but there’s been no evidence of the government using the regulations to crack down on deepfake porn.

While women wait for regulatory action, services from companies like Alecto AI and That’sMyFace may fill the gaps. But the situation calls to mind the rape whistles that some urban women carry in their purses so they’re ready to summon help if they’re attacked in a dark alley. It’s useful to have such a tool, sure, but it would be better if our society cracked down on sexual predation in all its forms, and tried to make sure that the attacks don’t happen in the first place.
