Innovation is an abstract concept, but as a premier research institution, UC Berkeley is a hub for it. We decided to highlight some of the work occurring right in our backyard.
Alpha60
Alpha60 is a data scraper and geospatialization tool pioneered on the UC Berkeley campus. The tool tracks the number of people uploading and downloading files over BitTorrent, a peer-to-peer file sharing protocol, essentially quantifying and mapping data on global unauthorized media access.
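At its core, such a tool boils down to tallying observed peers by place and title. The Python sketch below is a rough illustration of that tallying step only; the sample records and the geolocate() helper are invented stand-ins, not alpha60’s actual code.

```python
# A minimal sketch of the aggregation a tool like alpha60 might perform.
# The records and the geolocate() helper are hypothetical stand-ins.
from collections import Counter

def geolocate(ip):
    """Hypothetical IP-to-city lookup (a real tool would query a GeoIP database)."""
    fake_db = {"203.0.113.7": "Sydney", "198.51.100.22": "Toronto"}
    return fake_db.get(ip, "unknown")

# Each record: (IP address observed in a BitTorrent swarm, show being shared)
observations = [
    ("203.0.113.7", "Show A"),
    ("198.51.100.22", "Show A"),
    ("203.0.113.7", "Show B"),
]

# Count swarm activity per city per show: the raw material both for a
# "ratings system for TV piracy" and for mapping pirate archipelagos.
ratings = Counter((geolocate(ip), show) for ip, show in observations)
for (city, show), n in ratings.most_common():
    print(f"{show}: {n} peer(s) observed in {city}")
```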
According to Abigail de Kosnik, project director and an assistant professor in the Berkeley Center for New Media and the Department of Theater, Dance and Performance Studies, alpha60 works as a ratings system for TV piracy. The tool can show the spread of pirate activity for a given TV show, as well as the shape of “pirate archipelagos,” or the physical sites where illicit media sharing occurs.
According to de Kosnik, because traditional TV ratings systems (such as the Nielsen system) do not take into account people watching on unofficial platforms, their numbers are not completely accurate. One piece still missing from alpha60, however, is the ability to track pirate streaming activity.
The tool takes its name from the 1965 French science-noir film “Alphaville,” in which a sentient supercomputer called “alpha 60” rules over the city. The alpha60 team is led by de Kosnik, her husband — computer scientist Benjamin de Kosnik — and campus alumnus Jingyi Li.
— Camryn Bell
“Naked to the Sky”
“In live music performances, we often forget that there are humans behind the instruments. We expect perfect sound, but this doesn’t reflect the imperfect nature of our bodies.”
— Scott Rubin
Scott Rubin is currently engaged in research and compositions that interface dancers with improvising musicians, using motion sensors worn by the dancers to control live digital signal processing. His work explores the relationship between music and the body, as well as the connections between dance and music and between the analog and the digital.
Rubin, a campus doctoral student in the music department, premiered his project “Naked to the Sky” in November 2016 with the Thin Edge New Music Collective in Toronto. The project was later performed that December at UC Berkeley’s Hertz Hall by SanSan Kwan, an associate professor in the Department of Theater, Dance and Performance Studies, dancing with UC Berkeley’s Eco Ensemble.
For this piece, Rubin tracked the motions of the dancers with motion sensors attached to their arms. His software, written in MaxMSP, maps the data from the sensors to parameters that control how the musicians’ sound is sampled, processed and played back in real time.
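The underlying idea is a mapping from sensor readings to processing parameters. The Python sketch below shows one plausible version of such a mapping; the value ranges and the particular parameters are assumptions for illustration, not Rubin’s actual patch.

```python
# A simplified sketch of sensor-to-sound mapping in the spirit of
# Rubin's MaxMSP software; the ranges and parameters are illustrative.
import math

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor reading into a DSP parameter range."""
    t = max(0.0, min(1.0, (value - in_lo) / (in_hi - in_lo)))
    return out_lo + t * (out_hi - out_lo)

# One accelerometer frame from a sensor on a dancer's arm (in g).
ax, ay, az = 0.3, -0.8, 0.5
magnitude = math.sqrt(ax**2 + ay**2 + az**2)  # overall intensity of motion

# Faster motion -> shorter sample grains and heavier processing.
grain_ms = scale(magnitude, 0.0, 2.0, 500.0, 20.0)  # sample grain size
feedback = scale(magnitude, 0.0, 2.0, 0.0, 0.9)     # delay-line feedback
print(f"grain size: {grain_ms:.0f} ms, feedback: {feedback:.2f}")
```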
“In live music performances, we often forget that there are humans behind the instruments. We expect perfect sound, but this doesn’t reflect the imperfect nature of our bodies. … In my work, I want to focus on the body’s role in acoustic and electronic sound production, and this project was a fundamental step in exploring that idea,” Rubin said in an email.
Rubin is currently working with a group of performers in Berlin on a project titled “ironic erratic erotic,” which builds on the foundational concepts of “Naked to the Sky.”
— Camryn Bell
Danny Goldstein
Cosmology is the study of the large-scale structure and evolution of the universe — what drives the way it changes over time. One of the fundamental goals of the field is to accurately measure the Hubble constant, which sets the universe’s current expansion rate; because that rate changes as a function of time (and thus of distance in space), measuring it across distances traces the universe’s expansion history. Doctoral student Danny Goldstein, together with professor Peter Nugent, recently developed a new way of making these measurements at distances too far to observe by conventional means — through gravitationally lensed Type Ia supernovae.
It’s not an easy task. The team collects about 3,000 images of the sky every night with the Palomar Transient Factory.
“We get about 1.5 million detections of variability that are total garbage,” Goldstein explained. “If we didn’t have automated techniques to go through this data and rapidly determine whether something extracted from a difference image is real or bogus, we would be totally hosed.”
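That “real or bogus” step is, at heart, a binary classification problem. Here is a toy sketch of the idea; the feature names, labels and random data are placeholders, not the survey’s actual pipeline.

```python
# A toy sketch of automated "real or bogus" vetting of detections
# extracted from difference images. All data here are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features per detection: flux-to-noise ratio, shape
# ellipticity, distance to the nearest bad pixel.
X_train = rng.normal(size=(1000, 3))
y_train = (X_train[:, 0] > 0.5).astype(int)  # stand-in real/bogus labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score tonight's detections and keep only the likely-real ones.
tonight = rng.normal(size=(5, 3))
scores = clf.predict_proba(tonight)[:, 1]
print("likely real:", np.where(scores > 0.9)[0])
```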
Amazingly, despite that volume, the team was able to find a cosmic rarity — a multiply imaged, gravitationally lensed supernova. Because general relativity bends light around the asymmetric mass distribution of the lens, the multiple images of the supernova travel paths of different lengths and arrive at different times, allowing for highly precise probes of the Hubble constant and other cosmological parameters.
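To see how an arrival-time difference turns into a measurement of the Hubble constant, consider the back-of-the-envelope sketch below. The redshifts, the 5 percent offset and the flat Lambda-CDM setup are invented for illustration; they are not Goldstein’s actual numbers.

```python
# Back-of-the-envelope time-delay cosmography: a measured delay between
# two images, plus a lens model, pins down the "time-delay distance"
# D_dt, which scales as 1/H0. All numbers below are illustrative.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light, km/s

def comoving_distance(z, h0, om=0.3):
    """Comoving distance in Mpc for a flat Lambda-CDM universe."""
    integrand = lambda zp: 1.0 / np.sqrt(om * (1 + zp) ** 3 + (1 - om))
    return (C_KM_S / h0) * quad(integrand, 0.0, z)[0]

def time_delay_distance(z_lens, z_src, h0):
    """D_dt = (1 + z_l) * D_l * D_s / D_ls (angular diameter distances)."""
    dc_l, dc_s = comoving_distance(z_lens, h0), comoving_distance(z_src, h0)
    d_l = dc_l / (1 + z_lens)
    d_s = dc_s / (1 + z_src)
    d_ls = (dc_s - dc_l) / (1 + z_src)  # valid for a flat universe
    return (1 + z_lens) * d_l * d_s / d_ls

z_l, z_s = 0.5, 1.5  # assumed lens and source redshifts
d_dt_fiducial = time_delay_distance(z_l, z_s, h0=70.0)
d_dt_measured = 1.05 * d_dt_fiducial  # pretend observation, 5% offset

# Since D_dt is proportional to 1/H0, rescaling recovers the inferred value.
h0_inferred = 70.0 * d_dt_fiducial / d_dt_measured
print(f"inferred H0 = {h0_inferred:.1f} km/s/Mpc")
```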
Goldstein thinks the field can go even further. “What I’m most excited to do with strongly lensed supernova, what I think personally is going to be the coolest thing, is I think we can detect shock breakout of core collapsed supernova — literally the first seconds of the visible transient — which has never been done by using strong lensing before.”
This feat would require constructing a “lens model” of the foreground galaxy in the time between when two images arrive — normally a half-year task, and one that will require the careful application of machine learning to reach the speed and accuracy needed to predict when and where on the sky the next image will arrive. But if successful, it would revolutionize our understanding of supernovae, as well as of the universe’s expansion as a whole.
— Imad Pasha
Kristofer Bouchard
A fundamental goal of neuroscience is to effectively “read” neural signals with external detectors, with applications ranging from physical therapy to prosthetic limb control. The neural signals these detectors pick up, however, are often both noisy and high-dimensional, requiring inventive means of analysis to pick out the relevant features from the noise — features that provide the mapping from external stimuli (e.g., sounds, touch) to neural responses.
“Google is oftentimes satisfied if their algorithms such as deep learning are able to actively predict how many clicks a given ad will garner. … As scientists, we need to demand more of our statistical machine learning methods — in particular, they need to be interpretable.”
— Kristofer Bouchard
Computational and systems neuroscientist Kristofer Bouchard, at Lawrence Berkeley National Laboratory, is working with Dr. Edward Chang at UCSF to explore how deep learning, a form of machine learning, can be used to extract useful neurological signals in an interpretable way.
“Google is oftentimes satisfied if their algorithms such as deep learning are able to actively predict how many clicks a given ad will garner,” Bouchard explained. “As scientists, we need to demand more of our statistical machine learning methods — in particular, they need to be interpretable.”
That is, neuroscientists are interested not only in the predictive power of their models but also in using them to explore the biological and physical processes that generate the data. That exploration requires a nuanced approach, one made possible only by the advent of supercomputing.
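One way to see the difference between prediction and interpretation is a decoder simple enough to read directly. In the sketch below, the learned weights themselves say which recording channels carry the stimulus; the simulated recordings are placeholders for real neural data.

```python
# An illustrative sketch of the "interpretable" point: a linear decoder
# whose weights can be read off, unlike a black-box deep network.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(1)

# 500 trials x 50 recording channels; a stimulus feature drives
# channels 3 and 17, and everything is buried in noise.
stimulus = rng.normal(size=500)
neural = rng.normal(size=(500, 50))
neural[:, 3] += 2.0 * stimulus
neural[:, 17] -= 1.0 * stimulus

decoder = RidgeCV().fit(neural, stimulus)

# The learned weights *are* the interpretation: they say which
# channels carry the stimulus and with what sign.
top = np.argsort(np.abs(decoder.coef_))[::-1][:3]
print("most informative channels:", top)
```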
“When one is unyoked of the constraints of conventional computing, different types of algorithms and improved methodologies become available,” Bouchard said. “One has to do an architecture search over many different types of neural networks to find the ones that perform best — often over many hundreds and hundreds of networks.”
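In miniature, such a search looks like the sketch below: train a set of candidate networks and keep the best scorer. The toy task and the handful of architectures stand in for the hundreds of networks Bouchard describes.

```python
# A minimal sketch of an architecture search: try several candidate
# network shapes on a toy task and keep the best cross-validated one.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)  # stand-in decoding task

candidates = [(32,), (64,), (32, 32), (64, 32), (64, 64, 32)]
scores = {
    arch: cross_val_score(
        MLPClassifier(hidden_layer_sizes=arch, max_iter=500, random_state=0),
        X, y, cv=3,
    ).mean()
    for arch in candidates
}
best = max(scores, key=scores.get)
print("best architecture:", best, "accuracy:", round(scores[best], 3))
```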
Ultimately, supercomputing systems such as those at the national lab are what make it possible to handle the data involved in this type of work. Progress is being driven by a union of our increasing understanding of biology and of machine learning algorithms — and the two often inform each other. One day, these algorithms might be able to successfully read neural impulses and translate them into action, allowing people with varying forms of paralysis to control their limbs with only their thoughts.
— Imad Pasha
Prabhat
Climate change is one of the most pressing issues — if not the most pressing — facing mankind. Given our inability to measure weather conditions all over the planet at all times, scientists like Prabhat use climate simulations to try to predict how Earth will change in the coming years.
In particular, Prabhat is interested in extreme weather events such as hurricanes and cyclones, and in whether their frequency and intensity will change in the future. Climate simulations normally produce so much data that it is only possible to draw out summary statistics, such as mean monthly temperature. But Prabhat has developed a software package called TECA, or Toolkit for Extreme Climate Analysis, which can actually identify and track extreme weather systems matching user-specified conditions. This only works, though, for systems we already know how to quantify and identify.
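In spirit, that kind of detection is a hand-specified filter over gridded simulation output. The Python sketch below invents its own fields and thresholds (it is not TECA’s actual interface) to show the idea, and it is exactly this hand specification that the next step aims to replace.

```python
# A schematic sketch of criteria-based event detection: flag grid cells
# where a pressure low coincides with high wind, a crude cyclone
# criterion. Fields and thresholds are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
pressure = rng.normal(1010, 8, size=(90, 180))  # sea-level pressure, hPa
wind = rng.normal(12, 6, size=(90, 180))        # surface wind speed, m/s

# Plant one synthetic storm so the sketch has something to find.
pressure[40:44, 100:104] = 975.0
wind[40:44, 100:104] = 35.0

# Hypothetical user-specified thresholds for a cyclone candidate.
candidates = (pressure < 990.0) & (wind > 25.0)
for i, j in zip(*np.nonzero(candidates)):
    print(f"candidate at cell ({i}, {j}): "
          f"{pressure[i, j]:.0f} hPa, {wind[i, j]:.0f} m/s")
```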
“Instead of hand-specifying features that make for an extreme weather pattern, what we really want to do is provide examples of tropical cyclones, so positive and negative examples, and then see if a machine learning system can automatically learn what makes a tropical cyclone,” Prabhat explained. Moving forward, he and other scientists at Lawrence Berkeley National Laboratory are attempting to write codes that can determine the underlying atmospheric conditions that precipitate these storms.
This is where machine learning steps in. Prabhat is attempting to implement a method called semi-supervised classification. As he explained, “Perhaps I have labeled examples for a few well-categorized, well-known events, such as tropical cyclones, but there might be other events for which the climate science community does not have a definition, or there is a pattern which has not yet been labeled. We now have semi-supervised architectures working on such large data sets, and we’re starting to explore whether these architectures are actually able to discover new weather patterns which previously maybe had not been discovered.”
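The sketch below shows the semi-supervised idea at toy scale, using scikit-learn’s LabelSpreading: a few labeled examples propagate their labels through the structure of mostly unlabeled data. The clusters and labels are invented; the real work uses deep architectures on massive climate fields.

```python
# A small sketch of semi-supervised classification: a handful of labels
# spread through unlabeled data via its cluster structure.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(4)

# Two clusters of "weather patterns"; only a handful are labeled.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
y = np.full(200, -1)   # -1 marks an unlabeled example
y[:3] = 0              # a few known examples of pattern 0
y[100:103] = 1         # a few known examples of pattern 1

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y)

# Labels propagate through the data's structure to the unlabeled points;
# clusters only weakly connected to any label would hint at new patterns.
print("inferred labels for unlabeled points:", model.transduction_[3:10])
```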
Our ability to classify and characterize these weather patterns is critical, as extreme weather events will pose a greater threat to human life and property in the upcoming years than the overall year-to-year change in mean temperature — if hurricanes and other devastating storms are going to get stronger and more frequent, we need to know and prepare.
— Imad Pasha