Hava Siegelmann

A computer scientist, researcher, and program manager for the Information Innovation Office (I2O) of the Defense Advanced Research Projects Agency (DARPA), where she oversees the Guaranteeing AI Robustness Against Deception (GARD) and Lifelong Learning Machines (L2M) programs.


Is a

Academic
Person

Person attributes

Birthdate
August 23, 1964
Birthplace
Haifa
Nationality
Israel
Educated at
Rutgers, the State University of New Jersey
Technion – Israel Institute of Technology
Hebrew University of Jerusalem
Occupation
Computer scientist
Scientist

Academic attributes

Doctoral Advisor
Eduardo D. Sontag

Other attributes

Citizenship
United States
Known for
Hypercomputation
Wikidata ID
Q14243

Hava Siegelmann is a computer scientist, researcher, and program manager for the Information Innovation Office (I2O) of the Defense Advanced Research Projects Agency (DARPA). Siegelmann's work with DARPA focuses on advancing the intelligence of computerized devices through its Guaranteeing AI Robustness Against Deception (GARD) and Lifelong Learning Machines (L2M) programs. Her scientific research is primarily focused on creating biologically inspired computational systems capable of exhibiting intelligent behavior.

Education
Israel Institute of Technology

Hava Siegelmann attended the Israel Institute of Technology from 1984 to 1988, and graduated with a bachelor of arts degree in computer science.

The Hebrew University

Hava Siegelmann attended The Hebrew University from 1991 to 1992, where she completed a master of science degree in computer science. For her master's thesis, Siegelmann published a paper in 1992 titled "Document Allocation in Multiprocessor Information Retrieval Systems: An Application of Genetic Algorithms".
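The thesis topic, genetic algorithms applied to document allocation, can be illustrated by evolving an assignment of documents to processors. The sketch below is not the thesis's actual algorithm; the `ga_allocate` function, its objective (minimizing the maximum processor load), and all parameters are invented here for illustration.

```python
import random

def ga_allocate(doc_sizes, n_procs, pop_size=30, generations=50, seed=0):
    """Evolve an assignment of documents to processors that minimizes
    the maximum processor load (a simple load-balance objective)."""
    rng = random.Random(seed)

    def load(assign):
        loads = [0.0] * n_procs
        for size, proc in zip(doc_sizes, assign):
            loads[proc] += size
        return max(loads)

    # a chromosome assigns each document a processor id
    pop = [[rng.randrange(n_procs) for _ in doc_sizes] for _ in range(pop_size)]
    best = min(pop, key=load)
    for _ in range(generations):
        nxt = [best[:]]                              # elitism: keep best so far
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=load)    # tournament selection
            b = min(rng.sample(pop, 3), key=load)
            cut = rng.randrange(1, len(doc_sizes))   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                   # mutation: move one document
                child[rng.randrange(len(child))] = rng.randrange(n_procs)
            nxt.append(child)
        pop = nxt
        best = min(pop, key=load)
    return best, load(best)

assign, peak = ga_allocate([5, 3, 8, 2, 7, 4, 6, 1], 3)
print(peak)
```

Because the elite individual is copied unchanged into each generation, the best load found never worsens over the run.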

Rutgers University

Hava Siegelmann attended Rutgers University from 1991 to 1993, where she completed her Ph.D. in computer science. Her Ph.D. thesis, titled "Foundations of Recurrent Neural Networks", was written under the direction of Professor Eduardo D. Sontag and was published in the journal Science in October 1993.

Career
Israel Institute of Technology

From 1994 to 2000 (6 years), Hava Siegelmann served as the head of information systems engineering for the Israel Institute of Technology.

Massachusetts Institute of Technology

In 2001 (1 year), Hava Siegelmann was an assistant professor at the Massachusetts Institute of Technology.

Harvard University

From 2008 to 2010 (2 years), Hava Siegelmann was a researcher at Harvard University. During her time at Harvard University as a researcher, Siegelmann researched evolutionary dynamics with applications to cellular biology.

University of Massachusetts

Since 2001, Hava Siegelmann has been serving as a professor at the University of Massachusetts and a core member of its Neuroscience and Behavior Program. She is also the director of the Biologically Inspired Neural and Dynamical Systems (BINDS) laboratory at the University of Massachusetts Amherst, where she runs computational research on memory, circadian systems, cancer, and neurodegenerative diseases.

Defense Advanced Research Projects Agency (DARPA)

In July 2016, Siegelmann joined the Defense Advanced Research Projects Agency (DARPA) as a program manager in its Information Innovation Office (I2O). Her role is to develop programs for advancing the intelligence of computerized devices, lifelong learning machines, context-aware adaptivity, and user-centered applications.

In 2019, Siegelmann also created the Guaranteeing AI Robustness Against Deception (GARD) program with DARPA. The GARD program was created to research the vulnerability of machine learning (ML) platforms, and develop secure ML platforms by making them less vulnerable to adversarial deception attacks. Siegelmann made the following comments regarding the purpose of the GARD program:

The GARD program seeks to prevent the chaos that could ensue in the near future when attack methodologies, now in their infancy, have matured to a more destructive level. We must ensure ML is safe and incapable of being deceived. The kind of broad scenario-based defense we’re looking to generate can be seen, for example, in the immune system, which identifies attacks, wins and remembers the attack to create a more effective response during future engagements.
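The adversarial deception attacks GARD studies can be illustrated with the fast gradient sign method (FGSM), a canonical attack from the adversarial ML literature (the source does not name a specific method). The sketch below perturbs the input of a toy logistic classifier just enough to flip its prediction; the weights and numbers are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, y, eps):
    """Fast Gradient Sign Method against a logistic classifier p = sigmoid(w.x).
    Moves x by eps in the sign of the loss gradient, increasing the loss for
    the true label y."""
    p = sigmoid(w @ x)
    grad_x = (p - y) * w          # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])         # toy classifier weights
x = np.array([2.0, 0.5])          # clean input: score w.x = 1.0, class 1
x_adv = fgsm(x, w, y=1.0, eps=0.5)
print(w @ x, w @ x_adv)           # prints 1.0 -0.5: the prediction flips
```

A small, structured perturbation is enough to cross the decision boundary, which is exactly the kind of vulnerability the program aims to defend against.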

Hava Siegelmann also oversees the Lifelong Learning Machines (L2M) program for DARPA, which launched in 2019. The L2M program focuses on two primary areas: developing computational frameworks for applying lessons learned from new data or circumstances, and finding applicable examples of how biological systems naturally improve and adapt to their environments. Siegelmann made the following comments regarding the L2M program:

Life has had billions of years to develop approaches for learning from experience. There are almost certainly some secrets there that can be applied to machines so they can be not just computational tools to help us solve problems but responsive and adaptive collaborators.
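One common ingredient in lifelong-learning systems is rehearsal: replaying stored past experience while training on new data, so earlier skills are not overwritten. The sketch below is a toy illustration of that idea only, not DARPA's or Siegelmann's method; the `ReplayLearner` class and all its parameters are invented for the example.

```python
import random

class ReplayLearner:
    """Toy lifelong learner: a perceptron plus a rehearsal buffer, so updates
    on a new task are interleaved with replayed examples from old tasks."""
    def __init__(self, dim, buffer_size=100, lr=0.1, seed=0):
        self.w = [0.0] * dim
        self.buffer = []
        self.buffer_size = buffer_size
        self.lr = lr
        self.rng = random.Random(seed)

    def _update(self, x, y):
        # perceptron step; y is -1 or +1, update only on a mistake
        score = sum(wi * xi for wi, xi in zip(self.w, x))
        if y * score <= 0:
            self.w = [wi + self.lr * y * xi for wi, xi in zip(self.w, x)]

    def learn(self, x, y):
        self._update(x, y)
        # rehearsal: replay a few stored examples from earlier experience
        for xb, yb in self.rng.sample(self.buffer, min(3, len(self.buffer))):
            self._update(xb, yb)
        if len(self.buffer) < self.buffer_size:
            self.buffer.append((x, y))
        else:                      # overwrite a random slot once full
            self.buffer[self.rng.randrange(self.buffer_size)] = (x, y)

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if score > 0 else -1

learner = ReplayLearner(dim=2)
for _ in range(10):                       # task A: discriminate on axis 1
    learner.learn([1.0, 0.0], 1)
    learner.learn([-1.0, 0.0], -1)
for _ in range(10):                       # task B: discriminate on axis 2
    learner.learn([0.0, 1.0], 1)
    learner.learn([0.0, -1.0], -1)
# after training on task B, task A is still classified correctly
```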
Editorial roles

Hava Siegelmann reviews submissions for the following scientific journals: Journal of Theoretical Biology, Neural Computation, Theoretical Computer Science, Journal of Complexity, Neural Network World, Neural Networks, Connection Science, Cognitive Science, IEEE Transactions on Neural Networks, and Physical Review Letters. She is also an associate editor of Frontiers in Computational Neuroscience and an editorial board member of the American Institute of Physics journal Chaos: An Interdisciplinary Journal of Nonlinear Science.

Achievements and awards
Hebb Award of the International Neural Network Society

In 2016, Hava Siegelmann was awarded the Hebb Award from the International Neural Network Society for her contributions to biological learning. The award was presented to her in July 2016 at the World Congress on Computational Intelligence in Vancouver, British Columbia, Canada.

Co-originated Support Vector Clustering

Hava Siegelmann is one of the co-creators of support vector clustering, a popular clustering algorithm used in industry applications.

Super-Turing computation theory

Hava Siegelmann is the creator of Super-Turing computation theory. The theory grew out of her Ph.D. thesis at Rutgers University, titled "Foundations of Recurrent Neural Networks", and was published in Science in 1993. She later published a book on her theory, "Neural Networks and Analog Computation: Beyond the Turing Limit", in 1998. Siegelmann arrived at Super-Turing computation theory after re-reading the works of Alan Turing, the creator of the Turing model, and attributes her success in building the theory to being young and curious:

I was young enough to be curious, wanting to understand why the Turing model looked really strong. I tried to prove the conjecture that neural networks are very weak and instead found that some of the early work was faulty. I was surprised to find out via mathematical analysis that the neural models had some capabilities that surpass the Turing model. So I re-read Turing and found that he believed there would be an adaptive model that was stronger based on continuous calculations.

Her theory details an adaptive computational system that learns and evolves as it executes, using neural networks. When describing what her Super-Turing computational model offers, Siegelmann says:

Each time a Super-Turing machine gets input it literally becomes a different machine. You don't want this for your PC. They are fine and fast calculators and we need them to do that. But if you want a robot to accompany a blind person to the grocery store, you'd like one that can navigate in a dynamic environment. If you want a machine to interact successfully with a human partner, you'd like one that can adapt to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That's what this model can offer.

Computational models developed using the Super-Turing computational model can exhibit exponentially more possible behaviors than computational models built using the original Turing model. For example, if a machine built using the Turing model has 500 distinct behaviors, a machine built using the Super-Turing computational model from the same 500 behaviors would have 2^500 possible behaviors.
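The figure above reads as 2^500 (two to the power of 500), which is plain subset counting: if each of n base behaviors can be active or inactive in a configuration, there are 2^n distinct combinations. The source does not spell out this combinatorial reading, so treat it as an interpretive assumption; the snippet below just makes the count concrete.

```python
def behavior_combinations(n):
    # each of the n base behaviors is either present or absent in a
    # configuration, giving 2**n distinct combinations
    return 2 ** n

print(len(str(behavior_combinations(500))))  # 2**500 has 151 decimal digits
```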

Further Resources

'Super-Turing' machine learns and evolves | Kurzweil (Web, April 9, 2012)
https://www.kurzweilai.net/super-turing-machine-learns-and-evolves

'We Paid Little Attention to Vulnerabilities in Machine Learning Platforms': DARPA - The Sociable, by Tim Hinchcliffe (Web, February 19, 2019)
https://sociable.co/technology/vulnerabilities-machine-learning-darpa/

A Generic Approach for Identification of Event Related Brain Potentials via a Competitive Neural Network Structure, by Daniel Lange, Hava Siegelmann, Hillel Pratt, Gideon Inbar (Journal)
https://binds.cs.umass.edu/papers/1997_Lange_NIPS.pdf

A Multi-Agent System that Attains Longevity via Death, by Megan Olsen, Hava T Siegelmann (Journal)
https://binds.cs.umass.edu/papers/2007_Olsen_NESCAI.pdf

A Support Vector Method for Clustering, by Asa Ben-Hur, David Horn, Hava T Siegelmann, Vladimir Vapnik (Journal)
https://binds.cs.umass.edu/papers/2000_Ben-Hur_ICPR.pdf

...
