To authors: A regular paper presentation consists of a 15-minute recorded presentation plus 3 minutes of Q&A.
To attendees: All times in the main program are in Eastern Daylight Time (UTC-04:00).
Recent advances in cloud services and outsourced computation provide a promising paradigm for applications that generate, collect, or process large amounts of sensitive data. However, they also introduce significant security and privacy issues; among these, ensuring proper access control and trust in the infrastructure is a crucial challenge. In this talk, I will overview the challenges and solutions related to attribute-based encryption for access control and the related transparency issues. I will present some of our recent work on integrated, privacy-preserving, user-centric access control supporting secure deduplication, which addresses key security and privacy challenges in cloud services. Such attribute-based encryption approaches, as well as other emerging cryptographic mechanisms for secure computation, typically employ a third-party authority (TPA) as an integral component that needs to be trusted. Recent work on certificate transparency provides a promising direction for addressing such general trust issues related to a TPA. I will present our recent work tailored to these authority transparency issues and discuss the challenges ahead.
James Joshi is a professor in the School of Computing and Information at the University of Pittsburgh, and the founder and director of the Laboratory of Education and Research on Security Assured Information Systems (LERSAIS). He is currently serving as an NSF Program Director in the Computer and Network Systems (CNS) division and in the Secure and Trustworthy Cyberspace (SaTC) program. He is an elected Fellow of the Society for Information Reuse and Integration (SIRI), a Senior Member of the IEEE, and a Distinguished Member of the ACM. His research interests include access control models, security and privacy of distributed systems, trust management, and network security. He received the NSF CAREER award in 2006, and in the same year established and managed the NSF CyberCorps Scholarship for Service program at Pitt. He has served as program co-chair and/or general co-chair of several international conferences and workshops, including ACM SACMAT, IEEE BigData, IEEE IRI, IEEE CIC, IEEE ISM, and IEEE EDGE. He currently serves as the steering committee chair of IEEE CIC/TPS/CogMI and as the Editor-in-Chief of the IEEE Transactions on Services Computing, and he has served on the editorial boards of several international journals. He has published over 120 articles as book chapters and as papers in journals, conferences, and workshops, and has served as a special issue editor for several journals, including Elsevier Computers & Security, ACM TOPS, Springer MONET, IJCIS, and Information Systems Frontiers. His research has been supported by NSF, NSA/DoD, and Cisco. Earlier, in 1995, he led the efforts to establish the first Computer Science & Engineering undergraduate degree program in Nepal.
Artificial Intelligence (AI) is affecting every aspect of our lives, from healthcare to finance to driving to managing the home. Sophisticated machine learning techniques, with a focus on deep learning, are being applied successfully to detect cancer, make the best choices for investments, determine the most suitable driving routes, and efficiently manage the electricity in our homes. We expect AI to have even more influence as advances are made in the technology as well as in learning, planning, reasoning, and explainable systems. These advances will greatly benefit humanity: organizations such as the United Nations have embarked on initiatives such as “AI for Good,” and we can expect to see more emphasis on applying AI for the good of humanity, especially in developing countries. However, the question that needs to be answered is: can AI be for good when AI techniques can be attacked and the techniques themselves can cause privacy violations? This position paper provides an overview of this topic, with protecting children and children’s rights as an example.
Dr. Bhavani Thuraisingham is the Founders Chair Professor of Computer Science and the Executive Director of the Cyber Security Research and Education Institute at the University of Texas at Dallas (UTD). She is also a visiting Senior Research Fellow at King’s College, University of London, and an elected Fellow of the ACM, the IEEE, the AAAS, the NAI, and the BCS. For the past 35 years, her research has focused on integrating cyber security and artificial intelligence/data science (in earlier years, computer security and data management/mining), and she served as a Cyber Security Policy Fellow at the New America Foundation in 2017-2018. She has received several prestigious awards, including the IEEE CS 1997 Technical Achievement Award, the ACM SIGSAC 2010 Outstanding Contributions Award, the IEEE ComSoc Communications and Information Security 2019 Technical Recognition Award, the IEEE CS Services Computing 2017 Research Innovation Award, the ACM CODASPY 2017 Lasting Research Award, the IEEE ISI 2010 Research Leadership Award, the 2017 Dallas Business Journal Women in Technology Award, and the ACM SACMAT 10-Year Test of Time Awards for 2018 and 2019 (for papers published in 2008 and 2009). She co-chaired the Women in Cyber Security Conference (WiCyS) in 2016, delivered the featured address at the 2018 Women in Data Science (WiDS) conference at Stanford University, and serves as the Co-Director of both the Women in Cyber Security and Women in Data Science Centers at UTD. Her 40-year career spans industry (Honeywell), a federal research laboratory (MITRE), the US government (NSF), and US academia. Her work has resulted in 130+ journal articles, 300+ conference papers, 160+ keynote and featured addresses, six US patents, and fifteen books, as well as technology transfer of her research to commercial products and operational systems.
She has also given featured addresses on data mining for counter-terrorism at the United Nations in New York and at the White House Office of Science and Technology Policy in Washington, DC. She received her PhD from the University of Wales, Swansea, UK, and the prestigious earned higher doctorate (D.Eng.) from the University of Bristol, UK.
Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms, especially deep neural networks (DNNs), are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, healthcare, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious consequences. Interest in this area of research has grown explosively. In this talk, we will cover the state of the art in trustworthy machine learning and then discuss some interesting future trends.
Somesh Jha received his B.Tech. in Electrical Engineering from the Indian Institute of Technology, New Delhi. He received his Ph.D. in Computer Science from Carnegie Mellon University under the supervision of Prof. Edmund Clarke (a Turing Award winner). Currently, Somesh Jha is the Lubar Professor in the Computer Sciences Department at the University of Wisconsin-Madison. His work focuses on the analysis of security protocols, survivability analysis, intrusion detection, formal methods for security, and the analysis of malicious code. Recently, he has focused his interest on privacy and adversarial ML (AML). Somesh Jha has published several articles in highly refereed conferences and prominent journals and has won numerous best-paper and distinguished-paper awards. Prof. Jha received the NSF CAREER award and is a Fellow of the ACM and the IEEE.