Hava Siegelmann

A computer scientist, researcher, and program manager for the Information Innovation Office (I2O) of the Defense Advanced Research Projects Agency (DARPA), where she oversees the Guaranteeing AI Robustness Against Deception (GARD) program and the Lifelong Learning Machines (L2M) program.

Hava Siegelmann is a computer scientist, researcher, and program manager for the Information Innovation Office (I2O) of the Defense Advanced Research Projects Agency (DARPA). Siegelmann's work with DARPA focuses on advancing the intelligence of computerized devices through its Guaranteeing AI Robustness Against Deception (GARD) and Lifelong Learning Machines (L2M) programs. Her scientific research is primarily focused on creating biologically inspired computational systems capable of exhibiting intelligent behavior. 

Education

Israel Institute of Technology

Hava Siegelmann attended the Israel Institute of Technology from 1984 to 1988, and graduated with a bachelor of arts degree in computer science. 

The Hebrew University

Hava Siegelmann attended The Hebrew University from 1991 to 1992, where she completed a master of science degree in computer science. For her master's thesis, Siegelmann published a paper in 1992 titled "Document Allocation in Multiprocessor Information Retrieval Systems: An Application of Genetic Algorithms".

Rutgers University

Hava Siegelmann attended Rutgers University from 1991 to 1993, where she completed her Ph.D. in computer science. Her Ph.D. thesis, titled "Foundations of Recurrent Neural Networks," was written under the direction of Professor Eduardo D. Sontag and was published in the journal Science in October 1993.

Career

Israel Institute of Technology

From 1994 to 2000 (6 years), Hava Siegelmann served as the head of information systems engineering for the Israel Institute of Technology. 

Massachusetts Institute of Technology

In 2001, Hava Siegelmann was an assistant professor at the Massachusetts Institute of Technology. 

Harvard University

From 2008 to 2010 (2 years), Hava Siegelmann was a researcher at Harvard University, where she researched evolutionary dynamics with applications to cellular biology. 

University of Massachusetts

Since 2001, Hava Siegelmann has been a professor at the University of Massachusetts and a Core Member of its Neuroscience and Behavior Program. She is also the director of the Biologically Inspired Neural and Dynamical Systems (BINDS) laboratory at the University of Massachusetts Amherst, where she runs computational research on memory, circadian systems, cancer, and neurodegenerative diseases. 

Defense Advanced Research Projects Agency (DARPA)

In July 2016, Siegelmann joined the Defense Advanced Research Projects Agency (DARPA) as a program manager for its Information Innovation Office (I2O). Her role is to develop programs for advancing the intelligence of computerized devices, lifelong learning machines, context-aware adaptivity, and user-centered applications.



In 2019, Siegelmann also created DARPA's Guaranteeing AI Robustness Against Deception (GARD) program. The GARD program was created to research the vulnerabilities of machine learning (ML) platforms and to develop secure ML platforms that are less susceptible to adversarial deception attacks. Siegelmann made the following comments regarding the purpose of the GARD program: 

The GARD program seeks to prevent the chaos that could ensue in the near future when attack methodologies, now in their infancy, have matured to a more destructive level. We must ensure ML is safe and incapable of being deceived. The kind of broad scenario-based defense we’re looking to generate can be seen, for example, in the immune system, which identifies attacks, wins and remembers the attack to create a more effective response during future engagements. 
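The kind of adversarial deception GARD is designed to counter can be illustrated with a toy example. The sketch below is illustrative only; the classifier, data, and step sizes are invented for the example and are not drawn from the GARD program. It trains a simple logistic-regression classifier, then flips its prediction with a gradient-sign perturbation in the spirit of the well-known fast gradient sign method:

```python
# Toy adversarial attack: a gradient-sign perturbation that flips the
# prediction of a hand-rolled logistic-regression classifier.
# All data and parameters here are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated classes in 2-D.
X = np.vstack([rng.normal(2.0, 0.4, (50, 2)),    # class +1 around (2, 2)
               rng.normal(-2.0, 0.4, (50, 2))])  # class -1 around (-2, -2)
y = np.array([1.0] * 50 + [-1.0] * 50)

# Train logistic regression by plain gradient descent on the logistic loss.
w = np.zeros(2)
for _ in range(500):
    margins = y * (X @ w)
    grad = -(y * (1.0 / (1.0 + np.exp(margins)))) @ X / len(X)
    w -= 0.5 * grad

x = np.array([2.0, 2.0])        # a correctly classified input
clean_pred = np.sign(w @ x)     # +1 before the attack

# Gradient of the loss with respect to the *input*, for true label +1,
# then a small structured perturbation in the gradient's sign direction.
grad_x = -(1.0 / (1.0 + np.exp(w @ x))) * w
x_adv = x + 3.0 * np.sign(grad_x)

adv_pred = np.sign(w @ x_adv)   # the prediction flips to -1
```

A defense of the kind GARD pursues would aim to keep `adv_pred` equal to `clean_pred` under such perturbations.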

Hava Siegelmann also oversees the Lifelong Learning Machines (L2M) program for DARPA, which launched in 2019. The L2M program focuses on two primary areas: developing computational frameworks for applying lessons learned from new data or circumstances, and finding applicable examples of how biological systems naturally improve and adapt to their environments. Siegelmann made the following comments regarding the L2M program: 

Life has had billions of years to develop approaches for learning from experience. There are almost certainly some secrets there that can be applied to machines so they can be not just computational tools to help us solve problems but responsive and adaptive collaborators. 

Editorial roles

Hava Siegelmann serves as a peer reviewer for the following scientific journals: Journal of Theoretical Biology, Neural Computation, Theoretical Computer Science, Journal of Complexity, Neural Network World, Neural Networks, Connection Science, Cognitive Science, IEEE Transactions on Neural Networks, and Physical Review Letters. She is also the Associate Editor of Frontiers in Computational Neuroscience and an editorial board member of the American Institute of Physics journal Chaos: An Interdisciplinary Journal of Nonlinear Science. 

Achievements and awards

Hebb Award of the International Neural Network Society

In 2016, Hava Siegelmann was awarded the Hebb Award from the International Neural Network Society. She won the award for her contributions to biological learning, and it was presented to her in July 2016 at the World Conference on Computational Intelligence in Vancouver, British Columbia, Canada. 

Co-originated Support Vector Clustering

Hava Siegelmann is one of the co-creators of support vector clustering, a popular clustering algorithm used in industry applications. 
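The core idea of the algorithm can be sketched in a few lines, assuming scikit-learn is available; the data and parameter values below are invented for illustration. A one-class SVM with an RBF kernel stands in for the support vector domain description used in the original paper: it learns a boundary enclosing the data, and two points are assigned to the same cluster only when the straight segment between them stays inside that boundary.

```python
# Sketch of support vector clustering (Ben-Hur, Horn, Siegelmann, Vapnik):
# 1) enclose the data in a kernel-induced boundary (one-class SVM / SVDD),
# 2) connect two points iff the segment between them stays inside the boundary,
# 3) clusters are the connected components of that graph.
# gamma, nu, and the sampling density are illustrative choices, not canonical.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),   # cluster around (0, 0)
               rng.normal(5.0, 0.3, (20, 2))])  # cluster around (5, 5)

svm = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.01).fit(X)

def same_cluster(a, b, n_samples=10):
    """True if every sampled point on the segment a->b is inside the boundary."""
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = np.outer(1 - ts, a) + np.outer(ts, b)
    # Small tolerance so boundary support vectors (decision ~ 0) count as inside.
    return bool(np.all(svm.decision_function(pts) > -1e-3))

# Union-find over the connectivity graph.
parent = list(range(len(X)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i in range(len(X)):
    for j in range(i + 1, len(X)):
        if same_cluster(X[i], X[j]):
            parent[find(i)] = find(j)

labels = [find(i) for i in range(len(X))]  # two components, one per blob
```

Checking segments between every pair is the expensive step; practical implementations prune this with a reduced adjacency graph, but the connected-components idea is the same.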

Super-Turing computation theory

Hava Siegelmann is the creator of Super-Turing computation theory. Her theory was published in Science in 1993 as part of her Ph.D. thesis at Rutgers University, titled "Foundations of Recurrent Neural Networks." She later published a book on her theory, "Neural Networks and Analog Computation: Beyond the Turing Limit," in 1998. Siegelmann arrived at the Super-Turing computational theory after re-reading the works of Alan Turing, the creator of the Turing model, and attributes her success in building the theory to being young and curious: 

I was young enough to be curious, wanting to understand why the Turing model looked really strong. I tried to prove the conjecture that neural networks are very weak and instead found that some of the early work was faulty. I was surprised to find out via mathematical analysis that the neural models had some capabilities that surpass the Turing model. So I re-read Turing and found that he believed there would be an adaptive model that was stronger based on continuous calculations. 

Her theory details an adaptive computational system that learns and evolves as it executes, using neural networks. When describing what her Super-Turing computational model offers, Siegelmann says: 

Each time a Super-Turing machine gets input it literally becomes a different machine. You don't want this for your PC. They are fine and fast calculators and we need them to do that. But if you want a robot to accompany a blind person to the grocery store, you'd like one that can navigate in a dynamic environment. If you want a machine to interact successfully with a human partner, you'd like one that can adapt to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That's what this model can offer.

Computational models developed using the Super-Turing model can exhibit 2^ℵ₀ (2 to the power of aleph-zero) possible behaviors, far more than computational models built using the original Turing model. For example, if a machine built using the Turing model was made to have 500 distinct behaviors, a machine built using the Super-Turing computational model based on the same 500 behaviors would have 2^500 possible behaviors. 
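The scale of the figures above is easy to check (illustrative arithmetic only):

```python
# The Turing-model machine in the example above has 500 behaviors;
# the corresponding Super-Turing figure is 2^500, an astronomically
# larger number. Python integers are arbitrary-precision, so we can
# compute it exactly and count its decimal digits.
n_super = 2 ** 500
print(len(str(n_super)))  # 2^500 is a 151-digit number
```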



Timeline

July 2016

Began working with the Defense Advanced Research Projects Agency (DARPA)

In July 2016, Siegelmann joined the Defense Advanced Research Projects Agency (DARPA) as a program manager for its Information Innovation Office (I2O).

July 2016

Hebb Award of the International Neural Network Society

In July 2016, Hava Siegelmann was awarded the Hebb Award from the International Neural Network Society.

2008

Worked as a researcher at Harvard University

From 2008 to 2010 (2 years), Hava Siegelmann was a researcher at Harvard University. 

2001

Worked as a Professor at the Massachusetts Institute of Technology

In 2001, Hava Siegelmann was an assistant professor at the Massachusetts Institute of Technology. 

2001

University of Massachusetts

In 2001, Hava Siegelmann became a professor at the University of Massachusetts and a Core Member of its Neuroscience and Behavior Program. She is also the director of the Biologically Inspired Neural and Dynamical Systems (BINDS) laboratory at the University of Massachusetts Amherst.

1994

Worked as head of information systems engineering at the Israel Institute of Technology

From 1994 to 2000 (6 years), Hava Siegelmann served as the head of information systems engineering for the Israel Institute of Technology. 

October 1993

Publishing of Super-Turing computation theory

Hava Siegelmann is the creator of Super-Turing computation theory. Her theory was published in Science in 1993 as part of her Ph.D. thesis at Rutgers University. 

July 23, 1991

Attended Rutgers University

Hava Siegelmann attended Rutgers University from 1991 to 1993, where she completed her Ph.D. in computer science.

1991

Attended The Hebrew University

Hava Siegelmann attended The Hebrew University from 1991 to 1992 where she completed a master of science degree in computer science. 

1984

Attended the Israel Institute of Technology

Hava Siegelmann attended the Israel Institute of Technology from 1984 to 1988, and graduated with a bachelor of arts degree in computer science. 

Further reading

Title
Author
Link
Type
Date

'Super-Turing' machine learns and evolves | Kurzweil

https://kurzweilai.net/

Web

April 9, 2012

'We Paid Little Attention to Vulnerabilities in Machine Learning Platforms': DARPA - The Sociable

Tim Hinchcliffe

Web

February 19, 2019

A Generic Approach for Identification of Event Related Brain Potentials via a Competitive Neural Network Structure

Daniel Lange, Hava Siegelmann, Hillel Pratt, Gideon Inbar

Journal



A Multi-Agent System that Attains Longevity via Death

Megan Olsen, Hava T Siegelmann

Journal



A Support Vector Method for Clustering

Asa Ben-Hur, David Horn, Hava T Siegelmann, Vladimir Vapnik

Journal



A Support Vector Method for Hierarchical Clustering

Asa Ben-Hur, David Horn, Hava T Siegelmann, Vladimir Vapnik

Journal



Active Information Retrieval

Tommi Jaakkola, Hava Siegelmann

Journal



Adaptive Multi-Modal Sensors

Kyle I Harrington, Hava T Siegelmann

Journal



An Integrated Symbolic and Neural Network Architecture for Machine Learning in the Domain of Nuclear Engineering

Ephraim Nissan, Hava Siegelmann, Alex Galperin

Journal



Applying Modular Networks and Fuzzy-Logic Controllers to Nonlinear Flexible Structures

Hava T. Siegelmann, Azmon Ofki, Hugo Guterman

Journal



Artificial Death for Attaining System Longevity

Megan Olsen, Hava Siegelmann

Journal



Attractor Systems and Analog Computation

C Jain, R K Jain, Hava T Siegelmann, Shmuel Fishman

Journal

April 1998

BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python

Hananel Hazan*, Daniel J. Saunders*, Hassaan Khan, Devdhar Patel, Darpan T. Sanghavi, Hava T. Siegelmann and Robert Kozma

Journal

December 12, 2018

Can AI Systems Learn How to Learn?

Kevin McCaney

Web



Computation by Dynamical Systems

Hava T. Siegelmann

Journal



Computation in Gene Networks

Asa Ben-Hur, Hava T Siegelmann

Journal



Development of Physical Super-Turing Analog Hardware

A. Steven Younger, Emmett Redd, Hava Siegelmann

Chapter

2014

Dr. Hava Siegelmann

DARPA

Web



Emotional Robotics: Tug of War

David Grant Cooper, Dov Katz, Hava T Siegelmann

Journal



Emotions for Strategic Real-Time Systems

Megan M Olsen, Kyle Harrington, Hava T Siegelmann

Journal



Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks

Javier Burroni, P. Taylor, Cassian Corey, Tengiz Vachnadze, and Hava Siegelmann

Journal

February 27, 2017

Error Forward-Propagation: Reusing Feedforward Connections to Propagate Errors in Deep Learning

Adam A. Kohan, Edward A. Rietman, Hava T. Siegelmann

Journal

August 9, 2018

Exact Neural Inference Over Graphical Models

Lars E Holzman, Hava T Siegelmann

Journal



EyeFrame: real-time memory aid improves human multitasking via domain-general eye tracking procedures

P. Taylor, Ze He, Noah Bilgrien, and Hava Siegelmann

Journal

September 2, 2015

Gibbs free energy as a measure of complexity correlates with time within C. elegans embryonic development

McGuire SH, Rietman EA, Siegelmann HT, Tuszynski JA

Journal

September 19, 2017

Documentaries, videos and podcasts

Title
Date
Link

Artificial Intelligence Colloquium: Lifelong and Robust Machine Learning

March 26, 2019

ECE 804 - Spring 2012 - Lecture 004 with Dr. Hava Siegelmann - Feb. 24 ,2012

February 28, 2013

Evolving Complex Systems in Biology and Medicine

January 31, 2014

Hava Siegelmann, IJCNN 2017 Plenary Talk: Understanding Some Brain's Computational Mechanisms Pt. 1

July 18, 2017

Hava Siegelmann, IJCNN 2017 Plenary Talk: Understanding Some Brain's Computational Mechanisms Pt. 2

July 20, 2017

Hava Siegelmann, IJCNN 2017 Plenary Talk: Understanding Some Brain's Computational Mechanisms Pt. 3

July 20, 2017

Irakli Beridze, Hava Siegelmann, Seán Ó hÉigeartaigh & Ehrik L. Aldana - AI Race panel

September 13, 2014

Lifelong learning machines (L2M) - Hava Siegelmann keynote at HLAI

March 22, 2019

Lifelong Learning Machines (L2M) Proposers Day

November 29, 2017

Panel Discussion: Scientific Funding for Deep Learning

June 12, 2019
