Joanna J Bryson
cs.bath.ac.uk/~jjb
hertie-school.org/en/who-we-are/profile/person/bryson
Personal website - joannajbryson.org
facebook.com/joanna.j.bryson
twitter.com/j2bryson
linkedin.com/in/bryson
Joanna Bryson on Wikipedia
https://youtu.be/zOOyXFf_b48
Joanna Bryson, Robots, Science, & Simulated Society - Ignite University of Bath
Published on May 9, 2013
Quote:
Joanna Bryson from the Department of Computer Science gives her talk "Robots, Science, & Simulated Society: How AI Helps Us Change Our World" at Ignite University of Bath #3 on 20 March 2013.
https://youtu.be/_y3p4OuH9So
Joanna Bryson, Professor at University of Bath - Machine Intelligence Summit NY 2016
Published on Nov 21, 2016
Quote:
Why AI Must Be Biased, and How We Can Respond
Like physics and biology, computation is a natural process with natural laws. We are making radical progress in artificial intelligence because we have learnt to exploit machine learning to capture existing computational outputs developed and transmitted by humans with human culture. This powerful strategy unfortunately undermines the assumption that machine intelligence, deriving from mathematics, would be pure and neutral, providing a fairness beyond what is present in human society. In learning the set of biases that constitute a word's meaning, AI also learns patterns, some of which are based on our unfair history. Addressing such prejudice requires domain-specific interventions.
Joanna J. Bryson is a transdisciplinary researcher on the structure and dynamics of human- and animal-like intelligence. Her research covers topics ranging from artificial intelligence, through autonomy and robot ethics, and on to human cooperation. She holds degrees in Psychology from Chicago (AB) and Edinburgh (MPhil), and Artificial Intelligence from Edinburgh (MSc) and MIT (ScD). She has additional professional research experience from Oxford, Harvard, and LEGO, and technical experience in Chicago's financial industry, and international organization management consultancy. Bryson is presently a Reader (associate professor) at the University of Bath, and an affiliate of Princeton's Center for Information Technology Policy.
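The abstract above describes how AI learns "the set of biases that constitute a word's meaning": word embeddings place words in a vector space where distance reflects human association, so prejudicial associations in the training text carry over. A minimal sketch of that mechanism, using made-up toy vectors (not real learned embeddings, and not the published measurement method): a word's cosine similarity to two reference words yields an association gap.

```python
import math

# Toy word vectors, hypothetical values for illustration only.
# Real embeddings (e.g. word2vec, GloVe) are learned from large corpora.
vectors = {
    "doctor": [0.9, 0.2, 0.1],
    "nurse":  [0.3, 0.9, 0.2],
    "he":     [0.8, 0.1, 0.3],
    "she":    [0.2, 0.8, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def gender_gap(word):
    """How much closer `word` sits to 'he' than to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

print(gender_gap("doctor"))  # positive: this toy "doctor" leans toward "he"
print(gender_gap("nurse"))   # negative: this toy "nurse" leans toward "she"
```

With real embeddings trained on web text, gaps like this mirror documented human stereotypes, which is the point of the abstract: the bias comes from the data, not from the mathematics.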
https://youtu.be/f_pEM8hI97s
Dr Joanna Bryson Interview | Conscious Cities Conference No.2
Published on May 18, 2017
Quote:
Joanna J. Bryson is a transdisciplinary researcher on the structure and dynamics of human- and animal-like intelligence. Her research covers topics ranging from artificial intelligence, through autonomy and robot ethics, and on to human cooperation. She has professional research experience from Oxford, Harvard, and LEGO, and technical experience in Chicago’s financial industry. Bryson is presently a Reader (associate professor) at the University of Bath, and an affiliate of Princeton’s Center for Information Technology Policy.
Conscious Cities Conference No. 2:
Bridging Neuroscience, Architecture and Technology took place on Wednesday, 03 May 2017 in London.
The second Conscious Cities conference gathered the different industries and elements needed to build a Conscious City: one that is responsive to human activity and needs.
The conference addressed four themes, each presented and discussed by a panel of experts from academia and industry:
1. What Does Neuroscience Teach Us About the Built Environment?
2. How Can We Use High Technology in the Built Environment?
3. Creating Conscious Design: How Does Behavioural Insight Affect Architecture and Planning?
4. Building a Conscious City: The Role of Governance and Industry.
https://youtu.be/Nefo1Mr6qoE
Why creating AI that has free will would be a huge mistake | Joanna Bryson
Published on May 30, 2018
Quote:
AI expert Joanna Bryson posits that giving artificial intelligence the same rights a human has could result in pretty dire consequences, because AI has already proven that it can pick up negative human characteristics if those characteristics are in the data. It's therefore not crazy at all to think that AI could scan every YouTube comment in one afternoon and pick up all the negativity we've unloaded there. If AI has already proven that it is not only capable of making the wrong decision but eventually will make the wrong decision when it comes to data mining and implementation, why give it the same powers as us in the first place?