"Artificial Intelligence (AI) refers to computer systems that think and act like humans, or that think and act rationally. AI is rapidly transforming our world with innovations like autonomous vehicles driving our city streets, personal digital assistants in our homes and pockets, and direct brain–computer interfaces that can help a paralyzed person feel again through a brain-controlled robotic arm.
In recent years, the field of AI has experienced a remarkable surge in capabilities. Contributing factors include improved machine learning techniques, the availability of massive amounts of training data, unprecedented computing power, and mobile connectivity.
AI-enabled systems are beginning to revolutionize fields such as commerce, healthcare, transportation, and cybersecurity. AI has the potential to impact nearly all aspects of our society, including our economy, yet its development and use come with serious technical and ethical challenges and risks. AI must be developed in a trustworthy manner to ensure reliability and safety.
In contrast to deterministic rule-based systems, where reliability and safety may be built in and proven by design, AI systems typically make decisions based on data-driven models created by machine learning. Inherent uncertainties need to be characterized and assessed through standardized approaches to assure the technology is safe and reliable. Evaluation protocols must be developed, and new metrics are needed, to provide quantitative support to a broad spectrum of standards including data, performance, interoperability, usability, security, and privacy." - National Institute of Standards and Technology (NIST)