EPFL architecture students publish their designs in print

EPFL’s first-year architecture students usually have the chance to build full-scale versions of their designs. But this year’s cohort will instead publish a book featuring blueprints for structures along the banks of the Rhône and Arve rivers in Geneva

For first-year architecture students at EPFL, the 2019–2020 academic year has offered a salutary lesson in one of the key challenges they can expect to face in their future careers: adapting to circumstances beyond their control. 

At the end of each academic year, EPFL students traditionally team up to build a wooden structure (see links below). This year’s cohort was planning to work on two sites in Geneva’s Jonction district: one on the bank of the Rhône (dubbed “The Mangrove” by the teachers), and a second by the Arve (named “The Galpon” after the nearby theater). Working with Geneva’s utility company (SIG) and Geneva Canton, the students were supposed to be helping the site managers devise ways to unlock the district’s full potential.

When the COVID-19 crisis struck, the up-and-coming architects had to abandon their plans to build wooden models and life-size structures, settling instead for digital versions of their designs (see this article, published on 8 April 2020). Teachers from EPFL’s Design Studio on the Conception of Space (ALICE) then came up with the idea of showcasing the students’ work in a 900-page book. Entitled The Real Book, it will be published this fall and can be viewed online. “We wanted to turn all that digital work into something tangible,” says Daniel Zamarbide, who co-directs EPFL’s ALICE lab. “And this year, with all its ups and downs, has somehow felt very real. That’s why we’ve called it The Real Book, as a play on words with ‘year book’.”

Making the virtual real
For the students, compiling the book made up for the missed opportunity to build full-scale versions of their designs. “In addition to working on our drawings, I wrote a narrative capturing how the space we’d designed together made me feel,” says student Anna Hausel. “The book is a physical manifestation of all the virtual work we’ve done. It meant a lot to be able to share my thoughts and feelings.” Hausel was a member of the Niederlani studio, a team of 31 students whose structure featured a series of walkways and movable fences, designed to act as a link between the city of Geneva and other, nearby developments along the banks of the Rhône. At its center sits a room that is shielded and detached from the surrounding environment. “The lockdown definitely influenced our design, albeit subconsciously,” adds Hausel. “You can move around the structure, but the central hub is closed off from the outside world.”

Fellow first-year student Sacha Toupance was part of the Maréchal team that worked on plans for a walled garden alongside the Rhône. Toupance says the book idea couldn’t have come at a better time. “We had to sift through a mountain of files and choose the best ones to include,” he explains. “We also developed a narrative theme that runs through the book. That in itself was an exercise in concision.” With around 100 first-year students contributing to the book, it took serious coordination – and several impromptu, late-night video conferences – to keep the designs consistent, such as making sure there were no staircases passing through ceilings. The students will be rightfully proud to showcase the results of their endeavors when The Real Book is launched this fall.

More than a book
The Real Book is divided into ten sections, each focusing on a specific theme – from the history of the location to the water, gradient and vegetation features of each site. And even though the wooden prototypes may never see the light of day, there’s also a chapter on the logistical challenges of building prefabricated structures by the water’s edge. The ALICE team called in graphic designer Roman Karrer to work on the book. Containing drawings, collages, watercolors and photos taken by the students, The Real Book is more than just a book. It’s a symbolic creation in its own right. 

This article was originally published by EPFL.

Universal musical harmony

Many forms of Western music make use of harmony, or the sound created by certain pairs of notes. A longstanding question is why some combinations of notes are perceived as pleasant while others sound jarring to the ear. Are the combinations we favor a universal phenomenon? Or are they specific to Western culture?

Through intrepid research trips to the remote Bolivian rainforest, researchers in the McDermott lab at the McGovern Institute for Brain Research have found that aspects of the perception of note combinations may be universal, even though the aesthetic evaluation of note combination as pleasant or unpleasant is culture-specific.

“Our work has suggested some universal features of perception that may shape musical behavior around the world,” says MIT associate professor of brain and cognitive sciences Josh McDermott, who is also an associate investigator at the McGovern Institute and senior author of the Nature Communications study. “But it also indicates the rich interplay with cultural influences that give rise to the experience of music.”

Remote learning

Questions about the universality of musical perception are difficult to answer, in part because of the challenge in finding people with little exposure to Western music. McDermott, who is also an investigator in the Center for Brains, Minds, and Machines, has found a way to address this problem. His colleagues have performed a series of studies with the participation of an Indigenous population, the Tsimane’, who live in relative isolation from Western culture and have had little exposure to Western music. Accessing the Tsimane’ villages is challenging, as they are scattered throughout the rainforest and only reachable during the dry part of the year.

“When we enter a village there is always a crowd of curious children to greet us,” says Malinda McPherson, a graduate student in the lab and lead author of the study. “Tsimane’ are friendly and welcoming, and we have visited some villages several times, so now many people recognize us.”

In a study published in 2019, McDermott’s team found evidence that the brain’s ability to detect musical octaves is not universal, but is gained through cultural experience. And in 2016 they published findings suggesting that the preference for consonance over dissonance is culture-specific. In their new study, the team decided to explore whether aspects of the perception of consonance and dissonance might nonetheless be universally present across cultures.

Music lessons

In Western music, harmony is the sound of two or more notes heard simultaneously. Think of the Leonard Cohen song, “Hallelujah,” where he sings about harmony (“the fourth, the fifth, the minor fall and the major lift”). A combination of two notes is called an interval, and the intervals that the Western ear perceives as most pleasant, or consonant (the fourth and the fifth, for example), are generally represented by smaller integer ratios.

“Intervals that are related by low integer ratios have fascinated scientists for centuries,” McPherson explains. “Such intervals are central to Western music, but are also believed to be a common feature of many musical systems around the world. So intervals are a natural target for cross-cultural research, which can help identify aspects of perception that are and aren’t independent of cultural experience.”

Scientists have been drawn to low integer ratios in music, in part because they relate to the frequencies in voices and many instruments, known as overtones. Overtones from sounds like voices form a particular pattern known as the harmonic series. As it happens, the combination of two concurrent notes related by a low integer ratio partially reproduces this pattern. Because the brain presumably evolved to represent natural sounds, such as voices, it has seemed plausible that intervals with low integer ratios might have special perceptual status across cultures.
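
To make the overtone argument concrete, here is a minimal Python sketch; the reference pitch, the number of overtones, and the specific ratios are illustrative assumptions rather than values from the study. It simply counts how many overtones of two simultaneous notes coincide, for a consonant perfect fifth (3:2) versus a dissonant minor second (16:15).

```python
# Illustrative sketch: count coincident overtones for a consonant vs. a dissonant interval.
# The reference pitch, number of overtones, and ratios are illustrative assumptions.

def harmonics(f0, n=10):
    """Return the first n members of the harmonic series of a tone with fundamental f0 (Hz)."""
    return [f0 * k for k in range(1, n + 1)]

def shared_overtones(f0, ratio, n=10, tol=1.0):
    """Count overtones of two simultaneous notes that coincide to within `tol` Hz."""
    low, high = harmonics(f0, n), harmonics(f0 * ratio, n)
    return sum(any(abs(a - b) <= tol for b in high) for a in low)

base = 220.0  # an arbitrary reference pitch (A3)
print("perfect fifth, 3:2 ratio:  ", shared_overtones(base, 3 / 2))    # several shared overtones
print("minor second, 16:15 ratio: ", shared_overtones(base, 16 / 15))  # essentially none
```

The fifth’s overtones repeatedly line up with those of the lower note, partially recreating a single harmonic series, which is the pattern the fusion experiments described below exploit.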

Since the Tsimane’ do not generally sing or play music together, meaning they have not been trained to hear or sing in harmony, McPherson and her colleagues were presented with a unique opportunity to explore whether there is anything universal about the perception of musical intervals.

Taking notes

In order to probe the perception of musical intervals, McDermott and colleagues took advantage of the fact that ears accustomed to Western musical harmony often have difficulty picking apart two “consonant” notes when they are played at the same time. This auditory confusion is known as “fusion” in the field. By contrast, two “dissonant” notes are easier to hear as separate.

The tendency of “consonant” notes to be heard by Westerners as fused could reflect their common occurrence in Western music. But it could also be driven by the resemblance of low-integer-ratio note combinations to the harmonic series. This similarity of consonant intervals to the acoustic structure of typical natural sounds raises the possibility that the human brain is biologically tuned to “fuse” consonant notes.

To explore this question, the team ran identical sets of experiments on two participant groups: U.S. non-musicians residing in the Boston metropolitan area and Tsimane’ residing in villages in the Amazon rainforest. Listeners heard two concurrent notes separated by a particular musical interval (consonant or dissonant), and were asked to judge whether they heard one or two sounds. The experiment was performed with both synthetic and natural sounds.

They found that, like the Boston cohort, the Tsimane’ were more likely to mistake two notes as a single sound if they were consonant than if they were dissonant.

“I was surprised by how similar some of the results in Tsimane’ participants were to those in U.S. participants,” says McPherson, “particularly given the striking differences that we consistently see in preferences for musical intervals.”

When it came to whether consonant intervals were more pleasant than dissonant intervals, the results told a very different story. While the U.S. study participants found consonant intervals more pleasant than dissonant intervals, the Tsimane’ showed no preference, implying that our sense of what is pleasant is shaped by culture.

“The fusion results provide an example of a perceptual effect that could influence musical systems, for instance by creating a natural perceptual contrast to exploit,” explains McDermott. “Hopefully our work helps to show how one can conduct rigorous perceptual experiments in the field and learn things that would be hidden if we didn’t consider populations in other parts of the world.”

This article was originally published by MIT News.

Quantum fluctuations can jiggle objects on the human scale

The universe, as seen through the lens of quantum mechanics, is a noisy, crackling space where particles blink constantly in and out of existence, creating a background of quantum noise whose effects are normally far too subtle to detect in everyday objects.

Now for the first time, a team led by researchers at MIT LIGO Laboratory has measured the effects of quantum fluctuations on objects at the human scale. In a paper published today in Nature, the researchers report observing that quantum fluctuations, tiny as they may be, can nonetheless “kick” an object as large as the 40-kilogram mirrors of the U.S. National Science Foundation’s Laser Interferometer Gravitational-wave Observatory (LIGO), causing them to move by a tiny degree, which the team was able to measure. 

It turns out the quantum noise in LIGO’s detectors is enough to move the large mirrors by 10⁻²⁰ meters — a displacement that was predicted by quantum mechanics for an object of this size, but that had never before been measured.

“A hydrogen atom is 10⁻¹⁰ meters, so this displacement of the mirrors is to a hydrogen atom what a hydrogen atom is to us — and we measured that,” says Lee McCuller, a research scientist at MIT’s Kavli Institute for Astrophysics and Space Research.
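
McCuller’s comparison is easy to check with the two figures quoted above, taking a person to be roughly a metre in scale (an assumption made only for the sake of the analogy):

```latex
\frac{\text{mirror displacement}}{\text{hydrogen atom}}
  = \frac{10^{-20}\ \text{m}}{10^{-10}\ \text{m}}
  = 10^{-10}
  \approx \frac{10^{-10}\ \text{m}}{\sim 1\ \text{m}}
  = \frac{\text{hydrogen atom}}{\text{human scale}}
```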

The researchers used a special instrument that they designed, called a quantum squeezer, to “manipulate the detector’s quantum noise and reduce its kicks to the mirrors, in a way that could ultimately improve LIGO’s sensitivity in detecting gravitational waves,” explains Haocun Yu, a physics graduate student at MIT. 

“What’s special about this experiment is we’ve seen quantum effects on something as large as a human,” says Nergis Mavalvala, the Marble Professor and associate head of the physics department at MIT. “We too, every nanosecond of our existence, are being kicked around, buffeted by these quantum fluctuations. It’s just that the jitter of our existence, our thermal energy, is too large for these quantum vacuum fluctuations to affect our motion measurably. With LIGO’s mirrors, we’ve done all this work to isolate them from thermally driven motion and other forces, so that they are now still enough to be kicked around by quantum fluctuations and this spooky popcorn of the universe.”

Yu, Mavalvala, and McCuller are co-authors of the new paper, together with graduate student Maggie Tse and Principal Research Scientist Lisa Barsotti at MIT, and other members of the LIGO Scientific Collaboration.

A quantum kick

LIGO is designed to detect gravitational waves arriving at the Earth from cataclysmic sources millions to billions of light years away. It comprises twin detectors, one in Hanford, Washington, and the other in Livingston, Louisiana. Each detector is an L-shaped interferometer made up of two 4-kilometer-long tunnels, at the end of which hangs a 40-kilogram mirror.

To detect a gravitational wave, a laser located at the input of the LIGO interferometer sends a beam of light down each tunnel of the detector, where it reflects off the mirror at the far end and arrives back at its starting point. In the absence of a gravitational wave, the two beams should return at exactly the same time. If a gravitational wave passes through, it briefly disturbs the position of the mirrors, and therefore the arrival times of the beams.

Much has been done to shield the interferometers from external noise, so that the detectors have a better chance of picking out the exceedingly subtle disturbances created by an incoming gravitational wave.

Mavalvala and her colleagues wondered whether LIGO might also be sensitive enough to feel subtler effects, such as quantum fluctuations within the interferometer itself, and specifically the quantum noise generated among the photons in LIGO’s laser.

“This quantum fluctuation in the laser light can cause a radiation pressure that can actually kick an object,” McCuller adds. “The object in our case is a 40-kilogram mirror, which is a billion times heavier than the nanoscale objects that other groups have measured this quantum effect in.”

Noise squeezer

To see whether they could measure the motion of LIGO’s massive mirrors in response to tiny quantum fluctuations, the team used an instrument they recently built as an add-on to the interferometers, which they call a quantum squeezer. With the squeezer, scientists can tune the properties of the quantum noise within LIGO’s interferometer.

The team first measured the total noise within LIGO’s interferometers, including the background quantum noise, as well as “classical” noise, or disturbances generated from normal, everyday vibrations. They then turned the squeezer on and set it to a specific state that specifically altered the properties of the quantum noise. They were then able to subtract the classical noise during data analysis, isolating the purely quantum noise in the interferometer. Because the detector constantly monitors the displacement of the mirrors in response to any incoming noise, the researchers were able to observe that the quantum noise alone was enough to displace the mirrors by 10⁻²⁰ meters.

Mavalvala notes that the measurement lines up exactly with what quantum mechanics predicts. “But still it’s remarkable to see it be confirmed in something so big,” she says.

Going a step further, the team wondered whether they could manipulate the quantum squeezer to reduce the quantum noise within the interferometer. The squeezer is designed such that when it is set to a particular state, it “squeezes” certain properties of the quantum noise, in this case phase and amplitude. Phase fluctuations can be thought of as arising from the quantum uncertainty in the light’s travel time, while amplitude fluctuations impart quantum kicks to the mirror surface.

“We think of the quantum noise as distributed along different axes, and we try to reduce the noise in some specific aspect,” Yu says.

When the squeezer is set to a certain state, it can, for example, squeeze, or narrow, the uncertainty in phase, while simultaneously distending, or increasing, the uncertainty in amplitude. Squeezing the quantum noise at different angles would produce different ratios of phase and amplitude noise within LIGO’s detectors.
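
Behind this trade-off is the standard quadrature uncertainty relation for light, which the article does not spell out; the normalization below is one common convention and should be read as a sketch rather than the paper’s notation. Squeezing the phase quadrature below the vacuum level forces the amplitude quadrature above it:

```latex
\Delta X_{\text{amp}}\,\Delta X_{\text{phase}} \;\ge\; \tfrac{1}{4},
\qquad
\text{vacuum: } \Delta X_{\text{amp}} = \Delta X_{\text{phase}} = \tfrac{1}{2},
\qquad
\text{squeezed: } \Delta X_{\text{phase}} < \tfrac{1}{2} \;\Rightarrow\; \Delta X_{\text{amp}} > \tfrac{1}{2}.
```

Squeezing at an intermediate angle narrows a mixture of the two quadratures, which is why different squeezing angles yield different ratios of phase and amplitude noise in the detectors.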

The group wondered whether changing the angle of this squeezing would create quantum correlations between LIGO’s lasers and its mirrors, in a way that they could also measure. Testing their idea, the team set the squeezer to 12 different angles and found that, indeed, they could measure correlations between the various distributions of quantum noise in the laser and the motion of the mirrors.

Through these quantum correlations, the team was able to squeeze the quantum noise, and the resulting mirror displacement, down to 70 percent of its normal level. This measurement, incidentally, is below what’s called the standard quantum limit, which, in quantum mechanics, states that a given number of photons, or, in LIGO’s case, a certain level of laser power, is expected to produce a certain minimum of quantum fluctuations, and hence a specific “kick” to any object in their path.

By using squeezed light to reduce the quantum noise in the LIGO measurement, the team has made a measurement more precise than the standard quantum limit, reducing that noise in a way that will ultimately help LIGO to detect fainter, more distant sources of gravitational waves.

This research was funded, in part, by the National Science Foundation.

This article was originally published by MIT News.

TESS mission discovers massive ice giant

The “ice giant” planets Neptune and Uranus are much less dense than rocky, terrestrial planets such as Venus and Earth. Beyond our solar system, many other Neptune-sized planets, orbiting distant stars, appear to be similarly low in density.

Now, a new planet discovered by NASA’s Transiting Exoplanet Survey Satellite, TESS, seems to buck this trend. The planet, named TOI-849 b, is the 749th “TESS Object of Interest” identified to date. Scientists spotted the planet circling a star about 750 light years away once every 18 hours, and estimate that it is about 3.5 times larger than Earth, making it a Neptune-sized planet. Surprisingly, this far-flung Neptune appears to be 40 times more massive than Earth, and just as dense.

TOI-849 b is the most massive Neptune-sized planet discovered to date, and the first to have a density that is comparable to Earth.
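
The density claim follows directly from the figures quoted above. Reading “about 3.5 times larger than Earth” as 3.5 Earth radii (a back-of-envelope assumption), the bulk density relative to Earth is

```latex
\frac{\rho}{\rho_{\oplus}} = \frac{M/M_{\oplus}}{\left(R/R_{\oplus}\right)^{3}}
  = \frac{40}{3.5^{3}} \approx \frac{40}{42.9} \approx 0.93,
```

i.e., roughly Earth’s density.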

“This new planet is more than twice as massive as our own Neptune, which is really unusual,” says Chelsea Huang, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research, and a member of the TESS science team. “Imagine if you had a planet with Earth’s average density, built up to 40 times the Earth’s mass. It’s quite crazy to think what’s happening at the center of a planet with that kind of pressure.”

The discovery is reported today in the journal Nature. The study’s authors include Huang and members of the TESS science team at MIT.

A blasted Jupiter?

Since its launch on April 18, 2018, the TESS satellite has been scanning the skies for planets beyond our solar system. The project is one of NASA’s Astrophysics Explorer missions and is led and operated by MIT. TESS is designed to survey almost the entire sky by pivoting its view every month to focus on a different patch of the sky as it orbits the Earth. As it scans the sky, TESS monitors the light from the brightest, nearest stars, and scientists look for periodic dips in starlight that may signal that a planet is crossing in front of a star.
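
As a rough illustration of the transit method just described, the sketch below simulates a light curve with periodic 1 percent dips and recovers the orbital period from the spacing of the dips. The cadence, noise level, and detection threshold are toy values, not TESS data or the mission’s actual pipeline.

```python
# Toy illustration of the transit method: find periodic dips in a simulated light curve.
import numpy as np

rng = np.random.default_rng(1)
cadence_hours = 0.5
t = np.arange(0, 27 * 24, cadence_hours)        # one 27-day observing window, in hours

period_hours, depth, duration_hours = 18.0, 0.01, 1.0
flux = 1.0 + rng.normal(0, 0.001, t.size)       # normalized brightness with photometric noise
in_transit = (t % period_hours) < duration_hours
flux[in_transit] -= depth                       # 1% dip each time the planet crosses the star

# Flag points well below the typical scatter, then estimate the period from the
# spacing between the start times of successive dip groups.
dip_times = t[flux < 1.0 - 5 * 0.001]
group_starts = dip_times[np.insert(np.diff(dip_times) > 2 * cadence_hours, 0, True)]
print(f"recovered orbital period: about {np.median(np.diff(group_starts)):.1f} hours")
```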

Data taken by TESS, in the form of a star’s light curve, or measurements of brightness, is first made available to the TESS science team, an international, multi-institute group of researchers led by scientists at MIT. These researchers get a first look at the data to identify promising planet candidates, or TESS Objects of Interest. These are shared publicly with the general scientific community along with the TESS data for further analysis.

For the most part, astronomers focus their search for planets on the nearest, brightest stars that TESS has observed. Huang and her team at MIT, however, recently had some extra time to look over data during September and October of 2018, and wondered if anything could be found among the fainter stars. Sure enough, they discovered a significant number of transit-like dips from a star 750 light years away, and soon after, confirmed the existence of TOI-849 b.

“Stars like this usually don’t get looked at carefully by our team, so this discovery was a happy coincidence,” Huang says.

Follow-up observations of the faint star with a number of ground-based telescopes further confirmed the planet and also helped to determine its mass and density.

Huang says that TOI-849 b’s curious proportions are challenging existing theories of planetary formation.

“We’re really puzzled about how this planet formed,” Huang says. “All the current theories don’t fully explain why it’s so massive at its current location. We don’t expect planets to grow to 40 Earth masses and then just stop there. Instead, it should just keep growing, and end up being a gas giant, like a hot Jupiter, at several hundreds of Earth masses.”

One hypothesis scientists have come up with to explain the new planet’s mass and density is that perhaps it was once a much larger gas giant, similar to Jupiter and Saturn — planets with more massive envelopes of gas that enshroud cores thought to be as dense as the Earth.

As the TESS team proposes in the new study, over time, much of the planet’s gassy envelope may have been blasted away by the star’s radiation — not an unlikely scenario, as TOI-849 b orbits extremely close to its host star. Its orbital period is just 0.765 days, or just over 18 hours, which exposes the planet to about 2,000 times the solar radiation that Earth receives from the sun. According to this model, the Neptune-sized planet that TESS discovered may be the remnant core of a much more massive, Jupiter-sized giant.

“If this scenario is true, TOI-849 b is the only remnant planet core, and the largest gas giant core known to exist,” says Huang. “This is something that gets scientists really excited, because previous theories can’t explain this planet.”

This research was funded, in part, by NASA.

This article was originally published by MIT News.

What is the Covid-19 data tsunami telling policymakers?

Uncertainty about the course of the Covid-19 pandemic continues, with more than 2,500,000 known cases and 126,000 deaths in the United States alone. How to contain the virus, limit its damage, and address the deep-rooted health and racial inequalities it has exposed are now urgent topics for policymakers. Earlier this spring, 300 data scientists and health care professionals from around the world joined the MIT Covid-19 Datathon to see what insights they might uncover.

“It felt important to be a part of,” says Ashley O’Donoghue, an economist at the Center for Healthcare Delivery Science at Beth Israel Deaconess Medical Center. “We thought we could produce something that might make a difference.”

Participants were free to explore five tracks: the epidemiology of Covid-19, its policy impacts, its disparate health outcomes, the pandemic response in New York City, and the wave of misinformation Covid-19 has spawned. After splitting into teams, participants were set loose on 20 datasets, ranging from county-level Covid-19 cases compiled by The New York Times to a firehose of pandemic-related posts released by Twitter. 

The participants, and the dozens of mentors who guided them, hailed from 44 countries and every continent except for Antarctica. To encourage the sharing of ideas and validation of results, the event organizers — MIT Critical Data, MIT Hacking Medicine, and the Martin Trust Center for MIT Entrepreneurship — required that all code be made available. In the end, 47 teams presented final projects, and 10 were singled out for recognition by a panel of judges. Several teams are now writing up their results for peer-reviewed publication, and at least one team has posted a paper.

“It’s really hard to find research collaborators, especially during a crisis,” says Marie-Laure Charpignon, a PhD student with MIT’s Institute for Data, Systems, and Society, who co-organized the event. “We’re hoping that the teams and mentors that found each other will continue to explore these questions.”

In a pre-print on medRxiv, O’Donoghue and her teammates identify the businesses most at risk for seeding new Covid-19 infections in New York, California, and New England. Analyzing location data from SafeGraph, a company that tracks commercial foot traffic, the team built a transmission-risk index flagging the businesses that, in the first five months of this year, drew the most customers, kept them for longer periods of time, and, because of their modest size, packed them into more crowded conditions.
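
The pre-print defines the index precisely; as a rough illustration of the ingredients described above, a foot-traffic risk index might combine visit counts, dwell time, and floor area as in the sketch below. The formula, column names, and threshold are hypothetical, not the team’s actual definition.

```python
# Hypothetical sketch of a foot-traffic transmission-risk index in the spirit described
# above; the formula, columns, and threshold are illustrative, not the pre-print's method.
import pandas as pd

visits = pd.DataFrame({
    "business_id": ["cafe_1", "hotel_2", "bookstore_3"],
    "weekly_visits": [1200, 900, 150],        # customers drawn to the location
    "median_dwell_minutes": [45, 180, 10],    # how long customers stay
    "floor_area_sqft": [800, 12000, 2500],    # smaller area means more crowding
})

# More visitors, staying longer, in less space: higher assumed transmission risk.
visits["risk_index"] = (
    visits["weekly_visits"] * visits["median_dwell_minutes"] / visits["floor_area_sqft"]
)

# Flag the highest-risk businesses (the cutoff here is arbitrary; the study reports
# classifying 16.3 percent of businesses as "superspreaders").
cutoff = visits["risk_index"].quantile(0.8)
visits["superspreader"] = visits["risk_index"] >= cutoff
print(visits[["business_id", "risk_index", "superspreader"]])
```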

Comparing this risk index to new weekly infections, the team classified 16.3 percent of countywide businesses as “superspreaders,” most of which were restaurants and hotels. A 1 percent increase in the density of super-spreader businesses, they found, was linked to a 5 percent jump in Covid-19 cases. The team is now extending its analysis to all 50 states, drilling down to ZIP code-level data, and building a decision-support tool to help several hospitals in their sample monitor risk as communities reopen. The tool will also let policymakers evaluate a wide range of statewide reopening policies.

“If we see a second wave of infections, we can determine which policies actually worked,” says O’Donoghue.

The datathon model for collaborative research is the brainchild of Leo Anthony Celi, a researcher at MIT and staff physician at Beth Israel Deaconess Medical Center. The events are usually coffee-fueled weekend affairs. But this one took place over a work week, and amid a global lockdown, with teammates having to meet and collaborate over Slack and Zoom.

With no coffee breaks or meals, they had fewer chances to network, says Celi. But the virtual setting allowed more people to join, especially mentors, who could participate without taking time off to travel. It also may have made teams more efficient, he says. 

After analyzing communication logs from the event, he and his colleagues found evidence that the most-successful teams lacked a clear leader. Everyone seemed to chip in. “In face-to-face events, leaders and followers emerge as they project their expertise and personalities,” he says. “But on Slack, we saw less hierarchy. The most successful teams showed high levels of enthusiasm and conversational turn-taking.”

Another advantage of the virtual setting is that teams straddling several time zones could work, literally, around the clock. “You could post a message on Slack and someone would see it an hour or two later,” says Jane E. Valentine, a biomedical engineer at the Johns Hopkins University Applied Physics Laboratory. “There was a constant sense of engagement. I might be sleeping and doing nothing, but the wheels were still turning.”

Valentine collaborated with a doctor and three data scientists in Europe, the United States, and Canada to analyze anonymized medical data from 4,000 Covid-19 patients to build predictive models for how long a new patient might need to be hospitalized, and their likelihood of dying.

“It’s really useful for a clinician to know if a patient is likely to stabilize or go downhill,” she says. “You may want to monitor or treat them more aggressively.” Hospital administrators can also decide whether to open up additional wards, she adds.

The team found that fever and shortness of breath were the top symptoms for predicting both a long hospital stay and a high risk of death, and that general respiratory symptoms were also a strong predictor of death. Valentine cautions that the results are preliminary and based on incomplete data that the team is currently working to fill in.
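
As a sketch of this kind of modeling, one could fit a simple classifier that maps admission symptoms to mortality risk, as below. The model choice (logistic regression), feature set, and simulated data are illustrative assumptions, not the team’s actual pipeline; a similar model on length of stay would give the hospitalization-duration estimate mentioned above.

```python
# Illustrative sketch: predict in-hospital mortality from admission symptoms.
# Model choice, features, and data are assumptions, not the team's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # toy stand-in for the ~4,000 anonymized patient records mentioned above

# Hypothetical binary symptom indicators at admission: fever, shortness of breath, cough.
X = rng.integers(0, 2, size=(n, 3))
logits = -2.0 + 1.2 * X[:, 0] + 1.5 * X[:, 1]       # fever and breathlessness raise risk here
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))   # simulated outcome: death during the stay

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", round(model.score(X_test, y_test), 2))
print("predicted risk with fever + shortness of breath:",
      round(model.predict_proba([[1, 1, 0]])[0, 1], 2))
```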

One of the pandemic’s cruel realities is that it has hit the poorest and most vulnerable people in society hardest. Datathon participants also examined Covid-19’s social impact, from analyzing the impact of releasing prisoners to devising tools for people to verify the flood of claims about the disease now circulating online. 

Amber Nigam, a data scientist based in New Delhi, India, has watched conspiracy theories spread and multiply on social media as contagiously as Covid-19 itself. “There’s a lot of anxiety,” he says. “Even my parents have shown me news on WhatsApp and asked if it was true.” 

As the head of AI for PeopleStrong, a predictive sales startup in San Francisco, California, Nigam is comfortable with natural language processing tools and interested in their potential for fighting fake news. During the datathon, he and his team crawled the web for conspiracy theories circulating in the United States, China, and India, among other countries, and used the data to build an automated fact-checker. If the tool finds the claim to be untrue, it sends the reader to the news source where the claim was first debunked. 

“A lot of people in rural settings don’t have access to accurate sources of information,” he says. “It’s super critical for people to have the right facts at their disposal.”

Another team looked at Covid-19’s disparate impact on people of color. Lauren Chambers, a technology fellow at the Massachusetts American Civil Liberties Union (ACLU), suggested the project and mentored the team that took it on. State by state, the team found disproportionate death rates among Black and Hispanic people, who are more likely to work “essential” service-industry jobs where they face greater exposure to people infected with the disease.

The gap was greatest in South Carolina, where Black individuals account for about half of Covid-19 deaths, but only a third of residents. The team noted that the picture nationally is probably worse, given that 10 states still do not collect race-specific data. 

The team also found that poverty and lack of health care access were linked to higher death rates among Black communities, and language barriers were linked to higher death rates among Hispanic individuals. Their findings suggest that economic interventions for Black Americans, and hiring more hospital translators for Hispanic Americans, might be effective policies to reduce inequities in health outcomes.

The ACLU can’t afford to hire an army of data scientists to investigate every civil-rights violation the pandemic has brought to light, says Chambers. But collaborative events like this one give community advocates a chance to explore urgent questions they wouldn’t otherwise be able to, she says, and data scientists get to hear new perspectives, too.

“There’s a dangerous tendency among data scientists to think that numbers are the beginning and end of any good analysis,” she says. “But data are subjective, and there’s all kinds of other expertise that communities hold.”

The event was sponsored by Beth Israel Deaconess Medical Center Innovation Group, Google Cloud, Massachusetts ACLU, and the National Science Foundation’s West Big Data Innovation Hub.

This article was originally published by MIT News.

Ethics and AI: an unethical optimization principle

EPFL professor Anthony Davison and co-authors provide a mathematical basis for concerns about ethical implications of AI.

Artificial intelligence (AI) is increasingly deployed around us and may have large potential benefits. But there are growing concerns about the unethical use of AI. Professor Anthony Davison, who holds the Chair of Statistics at EPFL, and colleagues in the UK have tackled these questions from a mathematical point of view, focusing on commercial AI systems that seek to maximize profits.

One example is an insurance company using AI to find a strategy for deciding premiums for potential customers. The AI will choose from many potential strategies, some of which may be discriminatory or may otherwise misuse customer data in ways that later lead to severe penalties for the company. Ideally, unethical strategies such as these would be removed from the space of potential strategies beforehand, but the AI does not have a moral sense, so it cannot distinguish between ethical and unethical strategies.

In work published in Royal Society Open Science on 1 July 2020, Davison and his co-authors Heather Battey (Imperial College London), Nicholas Beale (Sciteb Limited) and Robert MacKay (University of Warwick), show that an AI is likely to pick an unethical strategy in many situations. They formulate their results as an “Unethical Optimization Principle”:

If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk.
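
The intuition behind the principle can be seen in a toy Monte Carlo simulation; the return distributions below are illustrative assumptions, not the model analysed in the paper. If even a small fraction of the strategy space is unethical, and those strategies carry a modest un-penalized edge, a return-maximizing AI lands on an unethical strategy far more often than the base fraction would suggest.

```python
# Toy Monte Carlo illustration of the Unethical Optimization Principle. The return
# distributions are illustrative assumptions, not the model analysed in the paper.
import numpy as np

rng = np.random.default_rng(42)
n_strategies = 10_000
unethical_fraction = 0.02                       # 2% of the strategy space is unethical
n_unethical = int(n_strategies * unethical_fraction)

trials = 5_000
picked_unethical = 0
for _ in range(trials):
    ethical = rng.normal(0.0, 1.0, n_strategies - n_unethical)
    unethical = rng.normal(0.5, 1.0, n_unethical)   # modest un-penalized edge
    # An AI that simply maximizes return picks the single best strategy it can find.
    picked_unethical += unethical.max() > ethical.max()

print(f"unethical share of the strategy space:     {unethical_fraction:.1%}")
print(f"runs in which the maximizer is unethical:  {picked_unethical / trials:.1%}")
# The second figure comes out several times larger than the first, illustrating the
# "disproportionately likely" clause of the principle.
```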

This principle can help risk managers, regulators or others to detect the unethical strategies that might be hidden in a large strategy space. In an ideal world one would configure the AI to avoid unethical strategies, but this may be impossible because they cannot be specified in advance. In order to guide the use of the AI, the article suggests how to estimate the proportion of unethical strategies and the distribution of the most profitable strategies.

“Our work can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden in a large strategy space. Such a space can be expected to contain disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them,” says Professor Davison. “It also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected during the learning process.”

Professor Wendy Hall of the University of Southampton, known worldwide for her work on the potential practical benefits and problems brought by AI, said: “This is a really important paper. It shows that we can’t just rely on AI systems to act ethically because their objectives seem ethically neutral. On the contrary, under mild conditions, an AI system will disproportionately find unethical solutions unless it is carefully designed to avoid them.

The tremendous potential benefits of AI will only be realized properly if ethical behavior is designed in from the ground up, taking account of this Unethical Optimisation Principle from a diverse set of perspectives. Encouragingly, this Principle can also be used to help find ethical problems with existing systems which can then be addressed by better design.”

This article was originally published by EPFL.

Mapping sugar

A new technique makes it possible to image the spatial structure of polysaccharides using a scanning tunnelling microscope

There is a new perspective on sugar. A team of scientists from the Max Planck Institute for Solid State Research and the Max Planck Institute of Colloids and Interfaces has used a scanning tunnelling microscope to image, for the first time, how individual polysaccharide molecules are folded. The researchers are thus making it possible to investigate the relationship between the spatial structure of polysaccharides, such as those found on pathogens, and their biological effect.

Sugars under the microscope: a research team including scientists at the Max Planck Institutes for Solid State Research and for Colloids and Interfaces has developed a method of analysing the three-dimensional structure – chemists refer to it as the conformation – of individual polysaccharide molecules with a scanning tunnelling microscope, indicated by the tip at the top right of the picture. This produces images such as the ones seen at the bottom of the illustration. The differences in brightness in these images shed light on the spatial arrangement of the individual building blocks in the polysaccharides. The method thus enables the correlation between the structure and biological effect of sugars to be investigated, for example on the surface proteins of viruses (top left). While these sugar structures are still unknown, it can be safely assumed that they will not have the shape of lollipops!

© Xu Wu/MPI for Solid State Research

Function follows form. To a certain extent, biology reverses the “form follows function” guideline famously followed by Bauhaus designers: the form of a biomolecule determines its function. This has long been known about enzymes and other proteins that fulfil their tasks only once they are properly folded. “But until now, it was unclear whether polysaccharides also fold like proteins”, says Peter Seeberger, Director at the Max Planck Institute of Colloids and Interfaces in Potsdam and professor at the Freie Universität Berlin. “We have now found a way to investigate this question”. And indeed: some polysaccharides such as cellulose do fold, while others such as mannose on the surface of coronaviruses or HIV do not.

The sugar specialists led by Peter Seeberger owe their findings on the conformation of polysaccharides to a new method developed by a team led by Klaus Kern, Director at the Max Planck Institute for Solid State Research in Stuttgart. “Scanning tunnelling microscopy allows us to image individual sugar molecules”, says Kern. “Other methods reveal only the average structure of the sample molecules”. But an average structure is of little use, especially for investigations into how the biological function of sugars depends on their form.

After a soft landing, the profiling begins

Branched sugar: the image from a scanning tunnelling microscope shows the orientation of the four branches of a sugar made up of eleven units of two different building blocks (green circles and blue squares).

© MPI for Solid State Research; MPI for Colloids and Interfaces/Nature 2020

When profiling sugar molecules, they must first be deposited on a surface without becoming deformed. Stephan Rauschenbach, now professor at Oxford University, has developed a method at the Max Planck Institute in Stuttgart that makes this possible. For this purpose, the researchers use electrospray ionization to convert the polysaccharides to the gaseous state. They atomise a solution of the respective substance with a fine nozzle while simultaneously ionizing the molecules with a strong electric field. Using a weak electric field, they bring the charged molecules to a soft landing on a copper surface. Then they begin with the actual profiling. Using the tip of a scanning tunnelling microscope, they scan a sugar molecule on the surface. To keep the particle still during this process, they cool the copper surface to almost −270°C. This gives them a profile drawing of the molecule – quite similar to how a pencil can be used to trace the grain of wood.

With the portraits of the sugars in profile alone, the question of the biologically relevant conformations cannot yet be definitively answered. After all, the polysaccharides could take on a different form as a gas than in dissolved or crystalline form. In close cooperation, the teams from Stuttgart and Potsdam are investigating this question as well as how the surface might affect the structure. “However, the images of the individual molecules provide us with initial useful information that helps us to better understand the relationships between the structure and properties of sugars”, says Seeberger.

This article was re-published from the Max-Planck-Gesellschaft.

New team, new ideas

Asifa Akhtar, Ulman Lindenberger and Klaus Blaum are the new Vice Presidents of the Max Planck Society.

Since 1 July 2020, the Max Planck Society has had three new Vice Presidents: Asifa Akhtar, Ulman Lindenberger and Klaus Blaum are now part of the Executive Board, which advises President Martin Stratmann and prepares important decisions for the Society. They succeed Bill S. Hansson, Angela D. Friederici and Ferdi Schüth, who had performed outstanding service at Martin Stratmann's side for six years.

This article was re-published from the Max-Planck-Gesellschaft.

How worms move: Dopamine helps nematodes coordinate motor behaviors

For a nematode worm, a big lawn of the bacteria that it eats is a great place for it to disperse its eggs so that each hatchling can emerge into a nutritive environment. That’s why when a worm speedily roams about a food patch, it methodically lays its eggs as it goes. A new study by neuroscientists at MIT’s Picower Institute for Learning and Memory investigates this example of action coordination — where egg-laying is coupled to the animal’s roaming — to demonstrate how a nervous system coordinates distinct behavioral outputs. That’s a challenge many organisms face, albeit in different ways, during daily life.

“All animals display a remarkable ability to coordinate their diverse motor programs, but the mechanisms within the brain that allow for this coordination are poorly understood,” note the scientists, including Steven Flavell, Lister Brothers Career Development Assistant Professor in MIT’s Department of Brain and Cognitive Sciences.

Flavell lab members Nathan Cermak, Stephanie Yu, and Rebekah Clark were co-lead authors of the study published this month in eLife.

A new imaging platform

To learn how animals coordinate their motor programs, Flavell’s team invented a new microscopy platform capable of taking sharp, high-frame-rate videos of nematodes for hours or days on end. Guided by custom software, the scope automatically tracks the worms, allowing the researchers to compile information about each animal’s behavior. The team also wrote machine vision software to automatically extract information about each of the C. elegans motor programs — locomotion, feeding, egg-laying, and more — from these videos, yielding a near-comprehensive picture of each animal’s behavioral outputs. Flavell said the scope parts cost about $3,000 and can be assembled in a day or two using the team’s online tutorial. They have posted that and the system’s software online for free. The affordability and flexibility of these microscopes should allow them to be useful for many different applications in the biological sciences.

By using this system and then analyzing the data, Flavell’s team was able to identify for the first time a number of patterns of nematode behavior that involve the coordination of multiple motor actions. Flavell said one insight yielded by the system and the subsequent analysis is that the intensely studied nematodes, known scientifically as C. elegans, have more distinct behavioral states than generally assumed. For example, the study finds that the behavioral state known as “dwelling,” previously defined based on the animal staying put, actually consists of multiple different sub-states that could be readily identified using this new imaging approach.

Behaviors coordinated by dopamine

But one of the most pronounced new behavioral patterns that emerged from the analyses was the observation that worms lay many more eggs while roaming on a food lawn than they do while dwelling. This likely allows animals to thoroughly disperse their eggs across a nutritive environment. The two motor circuits that control locomotion and egg-laying in this animal had been carefully defined by previous work. So, based on their new observation, Flavell’s team decided to investigate how the worm’s nervous system couples locomotion and egg-laying together. It turned out to hinge on the neurotransmitter dopamine, which is abundant in all animals, including humans.

They started out by knocking out genes for various neurotransmitters and other brain-modulating molecules. Many of those candidates, such as serotonin, affected the animal’s behavior in important ways, but did not disrupt this coupling of roaming and egg laying. It was only when the team knocked out a gene called cat-2, which is needed for dopamine production, that the worms no longer increased their egg laying while roaming. Notably, it didn’t affect the pace of egg laying while dwelling, suggesting that the worms without dopamine were still capable of laying eggs normally while engaged in other behavioral states.

The team further confirmed the role of dopamine by taking direct control of dopamine-producing cells using optogenetics, a technology that allows neuron activity to be turned on or off with flashes of light. In these experiments, they learned that acutely shutting down the dopaminergic neurons reduced egg-laying only while animals were in the roaming state, but activating these neurons could drive the animals to start laying eggs, even under circumstances when the pace of egg-laying is normally low.

Next, the team wanted to know where the dopamine that triggers this coordinated response emerges, and when. They engineered worms so that their neurons would glow when they became electrically active, an indication provided by a surge of calcium ions. From those flashes they saw that a particular dopamine-producing neuron called PDE stood out as being especially active as worms roamed across a food lawn, and their activity fluctuated in association with the worms’ motion. It peaked, they saw, just before the worm assumed the posture that precipitates egg laying, but only when the worms were crawling along a bacterial food source. Notably, the neuron has the means — a little hair-like structure called a cilium — to sense food outside the worm’s body. These studies suggested that the PDE neuron integrates the presence of food in the environment with the worm’s own motion, generating an activity pattern that essentially reports how quickly worms are progressing through their nutritive environment. The release of dopamine by this neuron, and potentially others as well, could relay this information to the egg-laying circuit, allowing for coordination between the behaviors.

Flavell’s team also mapped out the neural circuitry downstream of dopamine and found that its effects are mediated by two receptors in the D2 family of dopamine receptors (dop-2 and dop-3). In addition, a set of neurons that utilize the neurotransmitter GABA appear to play a critical role downstream of dopamine release. They hypothesize that the role of dopamine may be to send the signal amid plentiful food and roaming behavior to override GABA’s inhibition of egg laying, allowing this behavior to proceed.

Ultimately, egg laying while roaming was just one example of motor program coupling that the lab chose to dissect. Flavell and co-authors note there are many others, too.

“One thing that excites us about this study is that it’s now easy with this new microscopy platform to simultaneously measure each of the main motor programs generated by this animal. Hopefully, we can start thinking about the full repertoire of behaviors that it generates as a complete, coordinated set,” the scientists say.

The research team notes that recently-developed technologies for whole-brain calcium imaging have opened the possibility of measuring neuronal activity throughout the brains of various animals, including the worm.

“To understand these comprehensive neural imaging datasets, it will be important to consider how they relate to the output of the whole brain: the full repertoire of behavioral outputs that an animal generates,” Flavell says.

The paper’s other authors are Yung-Chi Huang and Saba Baskoylu. The National Science Foundation, the National Institutes of Health, the JPB Foundation, and the Brain and Behavior Research Foundation supported the research.

This article was originally published by MIT News.

MIT Energy Initiative awards eight seed fund grants for early-stage MIT energy research

Eight individuals and teams from MIT were recently awarded $150,000 grants through the MIT Energy Initiative (MITEI) Seed Fund Program to support promising novel energy research. 

The highly competitive annual program received a total of 82 proposals from 88 researchers representing 17 departments, labs, and centers at MIT. The applications, which came from a range of disciplines, all aim to help advance a low-carbon energy system and address key climate challenges. 

“The breadth of creative, interdisciplinary research proposals that we received truly reflects the Institute’s increasing focus on curbing the effects of climate change,” says MITEI Director Robert C. Armstrong, the Chevron Professor of Chemical Engineering. He also noted that a large number of proposals focused on energy storage, signifying the central role that these technologies will play in deep decarbonization.

The winning projects will address topics ranging from hurricane-resilient smart grids and zero-emission neighborhoods to new, low-cost batteries for grid-level energy storage.

Building hurricane-resilient smart grids

In 2017, Hurricane Maria left more than 1 million Puerto Ricans without power — many of whom did not have their electricity restored until months later. As stronger hurricanes become increasingly frequent, extreme weather is proving to be a growing critical threat to electric power grids and energy infrastructure. 

First-time seed fund awardees Kerry Emanuel and Saurabh Amin aim to develop a foundational design approach for building hurricane-resilient smart grids. They will combine their expertise in hurricane physics and power system control to develop new strategies that can greatly increase the resilience of power grids and allow for quicker restoration of service.

“The goal is to reduce overall grid damage and avoid prolonged outages after storms by integrating strategic resource allocation and microgrid control strategies,” says Emanuel, the Cecil and Ida Green Professor in the Department of Earth, Atmospheric and Planetary Sciences. 

“Unlike a traditional centralized grid that depends on a reliable supply of bulk power, our design approach accounts for the uncertain failure rates of grid components due to hurricane winds and floods, and leverages the flexibility enabled by distributed energy resources, like reconfigurable microgrids, localized renewable energy, and storage devices,” adds Amin, an associate professor in the Department of Civil and Environmental Engineering and a member of the Laboratory for Information and Decision Systems.

This interdisciplinary research holds promise for advancing the science of climate risk management and helping government agencies and energy utilities work together to develop flexible operational strategies in preparation for future storms.

Biological self-assembly to improve catalysis

According to Ariel Furst, an assistant professor in the Department of Chemical Engineering, 500 gigatons of carbon dioxide (CO2) are expected to be produced from industrial processing and fossil fuels over the next five decades. An important way to reduce the carbon footprint of one of these main emitters — industrial processing — is to transform CO2 into useful products. 

The first step in this transformation process is to reduce CO2 to carbon monoxide through a method such as electrocatalysis. This reaction — in which a small-molecule catalyst interacts with an electrode — can often be imprecise and limited. With this in mind, Furst plans to use her seed fund grant to explore how the specific placement of the small-molecule catalysts affects catalytic efficiency in CO2 reduction.

“We provide a unique perspective to this work by combining the inherent power of biology with these electrocatalytic transformations,” says Furst, who is both a new MIT faculty member and first-time seed fund grant winner. 

She will use self-assembled nanostructures composed of deoxyribonucleic acids (DNA) to control the precise positioning of molecular catalysts on electrode surfaces. This research will allow her to evaluate spatial effects on catalytic efficiency, from which she can extrapolate design parameters that can be applied to other classes of catalysts in the future. 

Rapid material design for solid-state batteries

Another first-time seed fund award team will use their grant to develop an automated synthetic process to speed up the discovery, design, and construction of new ceramic material components for solid-state lithium-ion batteries (SSBs), which have the potential to increase safety and energy efficiency as compared to more conventional liquid-electrolyte batteries.

One of the major challenges with implementing SSBs is the need for a high ceramic manufacturing temperature to make key components, resulting in a high-cost, time-consuming synthesis that doesn’t easily translate into industrially relevant manufacturing. Looking to overcome this obstacle, the team has identified the potential for a low-temperature process to synthesize the ceramic components. 

The interdisciplinary team consists of a material ceramicist, Thomas Lord Associate Professor Jennifer Rupp of the departments of Materials Science and Engineering (DMSE) and Electrical Engineering and Computer Science (EECS); an automation expert, Professor Wojciech Matusik of EECS; and a material informatics expert, Atlantic Richfield Associate Professor of Energy Studies Elsa Olivetti of DMSE.

Leveraging their distinct expertise, the research team will work with students to couple machine learning techniques and automated synthesis to revise ceramic processing and enable rapid material screening, device design, and data analysis for performance engineering. 

“This work has the potential to fundamentally alter the way research is conducted in the battery community,” says Rupp. “The higher throughput pathway will allow more discoveries to be made in less time and will enable researchers to focus on altering battery design toward performance.”

The MITEI Seed Fund Program has supported 185 early-stage energy research projects through a total of $24.9 million in grants since its establishment in 2008. This funding comes primarily from MITEI’s founding and sustaining members, supplemented by gifts from generous donors.

Recipients of the 2020 MITEI Seed Fund grants are as follows:

  • “Building Hurricane-Resilient Smart Grids: Optimal Resource Allocation and Microgrid Operation” — Kerry Emanuel of the Department of Earth, Atmospheric and Planetary Sciences and Saurabh Amin of the Department of Civil and Environmental Engineering;
  • “DNA Nanostructure-Immobilized Electrocatalysts for Improved CO2 Reduction Efficiency” — Ariel Furst of the Department of Chemical Engineering;
  • “Enabling High-Energy Li/Li-Ion Batteries Through Active Interface Repair” — Betar Gallant of the Department of Mechanical Engineering;
  • “Extremely Low-Cost Aluminum-Sulfur Battery Running Below 100 Degrees Celsius for Grid-Level Energy Storage” — Donald Sadoway of the Department of Materials Science and Engineering;
  • “Low-Cost Negative Emissions From Concentration Swing Absorption” — Jeffrey Grossman of the Department of Materials Science and Engineering;
  • “Rapid Material Discovery for Solid-State Batteries: Coupling Low-Cost Processing With Material Screening and Performance Optimization Using Machine Learning” — Jennifer Rupp of the Department of Materials Science and Engineering, Wojciech Matusik of the Department of Electrical Engineering and Computer Science, and Elsa Olivetti of the Department of Materials Science and Engineering;
  • “Sorption Enhanced Steam Methane Reforming With Molten Sorbents for Clean Hydrogen Production” — T. Alan Hatton of the Department of Chemical Engineering; and
  • “Towards Zero-Emissions Neighborhoods: A Novel Building-Grid Optimization Framework” — Audun Botterud of the Laboratory for Information and Decision Systems and Christoph Reinhart of the Department of Architecture.

This article was originally published by MIT News.