
August 8, 2024

(July 19) Symposium: US-Japan AI Security: Adversarial AI Risks and Mitigation Strategies for Disinformation and Cyber Threats

From: Akira Igata <igata@ip.rcast.u-tokyo.ac.jp>
Date: 2024/07/11

Dear All:

The Economic Security Research Program (ESRP) at RCAST, The University of Tokyo, and the Sasakawa Peace Foundation USA will co-host a symposium titled "US-Japan AI Security: Adversarial AI Risks and Mitigation Strategies for Disinformation and Cyber Threats."
The symposium will bring together a representative of the U.S. Department of Homeland Security in charge of AI policy, as well as three world-renowned AI researchers from Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, and Pacific Northwest National Laboratory. We hope you will join us to learn about cutting-edge AI research and the opportunities and risks posed by its deployment in society.
Japanese-English simultaneous interpretation will be provided. Pre-registration is required.

Please note that if applications exceed capacity, participants may be selected by lottery.

【Details】
Today, growing attention is being paid to the societal impacts of the rapid development of artificial intelligence. While cutting-edge AI technology offers many advantages, the development and use of "adversarial AI" (AI deployed with malicious intent, or attacks directed against AI systems) have emerged as a serious economic security risk for nations around the world.

Against this backdrop, the Economic Security Research Program (ESRP) at RCAST, The University of Tokyo, and the Sasakawa Peace Foundation USA will co-host a symposium titled "US-Japan AI Security: Adversarial AI Risks and Mitigation Strategies for Disinformation and Cyber Threats."

The symposium will feature speakers from the U.S. government responsible for developing policies to promote and regulate AI technology, as well as researchers from Lawrence Livermore National Laboratory (LLNL), Oak Ridge National Laboratory (ORNL), and Pacific Northwest National Laboratory (PNNL), leading U.S. national laboratories for AI research. They will introduce cutting-edge AI developments in the United States and explore the risks and opportunities these technologies will bring to our society. In particular, we will discuss how to counter AI-enabled disinformation and cyber attacks, how governments can best promote and regulate the technology, and the future of Japan-US cooperation on AI-related issues.

【Date and Time】
July 19th (Friday), 14:30-16:00 (doors open at 13:45)

【Venue】
ENEOS Hall, Building #3-S, Komaba II Campus, Research Center for Advanced Science and Technology (RCAST), The University of Tokyo
Address: 4-6-1 Komaba, Meguro-ku, Tokyo 153-8904 JAPAN
Map: https://www.rcast.u-tokyo.ac.jp/en/access.html

【Languages】
English (Japanese-English simultaneous interpretation will be provided)
 
【Panelists】
Amy Henninger (Senior Advisor and Branch Chief Advanced Computing, US Department of Homeland Security)

Maria Glenski (Data Scientist and Leader of the Foundational Data Science Group, National Security Directorate, Pacific Northwest National Laboratory)

Edmon Begoli (Director, Center for AI Security Research, Oak Ridge National Laboratory)

Michael Goldman (Associate Program Leader in Global Security, Lawrence Livermore National Laboratory)

【Moderator】
Akira Igata (Project Lecturer, RCAST, The University of Tokyo)

【Event Page】
https://esrp.rcast.u-tokyo.ac.jp/events/events-1806/?lang=en

【Registration form】
https://docs.google.com/forms/d/e/1FAIpQLScfcUcmllY3OEzjSfvJnWa-1azoqLKU87p1J8xac7HkR_sDzw/viewform

I will set aside plenty of time for Q&A - looking forward to seeing you all there!

Best Regards,
Akira

--

********************************************************************
Akira Igata
Project Lecturer, 
Research Center for Advanced Science and Technology, 
The University of Tokyo
Tel: 080-7888-2435
********************************************************************
