Hava Siegelmann is a computer scientist, researcher, and program manager in the Information Innovation Office (I2O) of the Defense Advanced Research Projects Agency (DARPA). Siegelmann's work at DARPA focuses on advancing the intelligence of computerized devices through its Guaranteeing AI Robustness Against Deception (GARD) and Lifelong Learning Machines (L2M) programs. Her scientific research is primarily focused on creating biologically inspired computational systems capable of exhibiting intelligent behavior.
Hava Siegelmann attended the Israel Institute of Technology from 1984 to 1988, and graduated with a bachelor of arts degree in computer science.
Hava Siegelmann attended The Hebrew University from 1991 to 1992, where she completed a master of science degree in computer science. For her master's thesis, Siegelmann published a 1992 paper titled "Document Allocation in Multiprocessor Information Retrieval Systems: An Application of Genetic Algorithms".
Hava Siegelmann attended Rutgers University from 1991 to 1993, where she completed her Ph.D. in computer science. Her Ph.D. thesis, titled "Foundations of Recurrent Neural Networks," was written under the direction of Professor Eduardo D. Sontag and published in the journal Science in October 1993.
From 1994 to 2000, Hava Siegelmann served as the head of information systems engineering at the Israel Institute of Technology.
In 2001, Hava Siegelmann was an assistant professor at the Massachusetts Institute of Technology.
From 2008 to 2010, Hava Siegelmann was a researcher at Harvard University, where she studied evolutionary dynamics with applications to cellular biology.
Since 2001, Hava Siegelmann has been a professor at the University of Massachusetts Amherst and a core member of its Neuroscience and Behavior Program. She is also the director of the university's Biologically Inspired Neural and Dynamical Systems (BINDS) laboratory, where she runs computational research on memory, circadian systems, cancer, and neurodegenerative diseases.
In July 2016, Siegelmann joined the Defense Advanced Research Projects Agency (DARPA) as a program manager in its Information Innovation Office (I2O). Her role is to develop programs for advancing the intelligence of computerized devices, lifelong learning machines, context-aware adaptivity, and user-centered applications.
In 2019, Siegelmann also created DARPA's Guaranteeing AI Robustness Against Deception (GARD) program. GARD was created to research the vulnerabilities of machine learning (ML) platforms and to develop ML systems that are less susceptible to adversarial deception attacks. Siegelmann made the following comments regarding the purpose of the GARD program:
The GARD program seeks to prevent the chaos that could ensue in the near future when attack methodologies, now in their infancy, have matured to a more destructive level. We must ensure ML is safe and incapable of being deceived. The kind of broad scenario-based defense we’re looking to generate can be seen, for example, in the immune system, which identifies attacks, wins and remembers the attack to create a more effective response during future engagements.
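To make "adversarial deception" concrete, the sketch below crafts a worst-case input perturbation with the fast gradient sign method (FGSM), a standard attack from the adversarial ML literature chosen here for illustration; it is not a method attributed to Siegelmann or to GARD. The toy model, random data, and `epsilon` budget are all assumptions.

```python
# Minimal sketch of an adversarial deception attack (FGSM).
# The classifier and input are toy placeholders, not a GARD system.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for a deployed ML platform (an assumption).
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20)   # a clean input
y = torch.tensor([0])    # its true label

# FGSM: nudge the input in the direction that increases the loss,
# bounded by a small perturbation budget epsilon.
x_adv = x.clone().requires_grad_(True)
loss_fn(model(x_adv), y).backward()
epsilon = 0.1
x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The perturbation is small enough to look like noise yet is aimed precisely at increasing the model's loss; defenses of the kind GARD pursues aim to keep predictions stable under exactly this sort of bounded manipulation.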
Hava Siegelmann also oversees DARPA's Lifelong Learning Machines (L2M) program, which launched in 2019. The L2M program focuses on two primary areas: developing computational frameworks that can apply lessons learned from new data or circumstances, and finding examples of how biological systems naturally improve and adapt to their environments. Siegelmann made the following comments regarding the L2M program:
Life has had billions of years to develop approaches for learning from experience. There are almost certainly some secrets there that can be applied to machines so they can be not just computational tools to help us solve problems but responsive and adaptive collaborators.
Hava Siegelmann serves as a peer reviewer for the following scientific journals: Journal of Theoretical Biology, Neural Computation, Theoretical Computer Science, Journal of Complexity, Neural Network World, Neural Networks, Connection Science, Cognitive Science, IEEE Transactions on Neural Networks, and Physical Review Letters. She is also an associate editor of Frontiers in Computational Neuroscience and an editorial board member of the American Institute of Physics journal Chaos: An Interdisciplinary Journal of Nonlinear Science.
In 2016, Hava Siegelmann was awarded the Hebb Award by the International Neural Network Society for her contributions to biological learning. The award was presented to her in July 2016 at the World Conference on Computational Intelligence in Vancouver, British Columbia, Canada.
Hava Siegelmann is one of the co-creators of support vector clustering, a popular clustering algorithm used in industry applications.
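As a rough illustration of the idea, the sketch below follows the outline of support vector clustering (Ben-Hur, Horn, Siegelmann & Vapnik, 2001): learn a kernelized boundary around the data, then place two points in the same cluster when the segment joining them stays inside that boundary. It approximates the paper's minimal enclosing sphere with scikit-learn's `OneClassSVM` (a convenience assumption; with an RBF kernel the two formulations are closely related), and all parameter values are illustrative.

```python
# Minimal sketch of support vector clustering using a one-class SVM
# as a stand-in for the minimal enclosing sphere in feature space.
import numpy as np
from sklearn.svm import OneClassSVM
from scipy.sparse.csgraph import connected_components

def support_vector_clustering(X, gamma=1.0, nu=0.1, n_checks=10):
    # Learn a boundary around the data (decision_function >= 0 inside).
    svm = OneClassSVM(kernel="rbf", gamma=gamma, nu=nu).fit(X)
    n = len(X)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            # Sample points along the segment X[i] -> X[j]; the pair is
            # connected iff the whole segment lies inside the boundary.
            ts = np.linspace(0.0, 1.0, n_checks)
            segment = X[i] + ts[:, None] * (X[j] - X[i])
            adj[i, j] = adj[j, i] = np.all(svm.decision_function(segment) >= 0)
    # Clusters are the connected components of the adjacency graph.
    _, labels = connected_components(adj, directed=False)
    return labels

# Two well-separated blobs should come out as (roughly) two clusters;
# points falling outside the boundary become singletons (outliers).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
print(support_vector_clustering(X, gamma=1.0))
```

The sampled-segment connectivity test mirrors the published algorithm's cluster-labeling step; its pairwise loop is the main cost, at O(n^2) boundary checks.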
Hava Siegelmann is the creator of Super-Turing computation theory. The theory grew out of her Ph.D. thesis at Rutgers University, titled "Foundations of Recurrent Neural Networks," and was published in Science in 1993. She later published a book on the theory, titled "Neural Networks and Analog Computation: Beyond the Turing Limit," in 1998. Siegelmann arrived at Super-Turing computation after re-reading the works of Alan Turing, the creator of the Turing model, and attributes her success in building the theory to being young and curious:
I was young enough to be curious, wanting to understand why the Turing model looked really strong. I tried to prove the conjecture that neural networks are very weak and instead found that some of the early work was faulty. I was surprised to find out via mathematical analysis that the neural models had some capabilities that surpass the Turing model. So I re-read Turing and found that he believed there would be an adaptive model that was stronger based on continuous calculations.
Her theory details an adaptive computational system, built on neural networks, that learns and evolves as it executes. When describing what her Super-Turing computational model offers, Siegelmann says:
Each time a Super-Turing machine gets input it literally becomes a different machine. You don’t want this for your PC. They are fine and fast calculators and we need them to do that. But if you want a robot to accompany a blind person to the grocery store, you’d like one that can navigate in a dynamic environment. If you want a machine to interact successfully with a human partner, you’d like one that can adapt to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That’s what this model can offer.
Computational models developed using the Super-Turing model exhibit a number of possible behaviors far greater than that of models built using the original Turing model. For example, if a machine built using the Turing model was made to have 500 distinct behaviors, a machine built using the Super-Turing computational model based on the same 500 behaviors would have 2^500 possible behaviors.
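One way to read that count (an interpretive assumption, not a claim from the source): if an adaptive machine can enable any subset of the 500 basic behaviors, each behavior contributes an independent on/off choice, so the number of distinct combined behaviors is

$$\underbrace{2 \times 2 \times \cdots \times 2}_{500\ \text{choices}} \;=\; 2^{500} \;\approx\; 3.3 \times 10^{150},$$

versus the 500 fixed behaviors of the non-adaptive machine.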