To authors: Regular paper presentations are 20 minutes of live presentation plus Q&A. Dataset/tool and poster paper presentations are 10 minutes of live presentation plus Q&A.
To attendees: All times in the main program are in Eastern Daylight Time (UTC-04:00).
Main Room (Sessions 1-7 and Keynote Talks): 994 8720 4200
Poster Sessions: 928 2550 5926
Statistical machine learning uses training data to produce models that capture patterns in that data. When models are trained on private data, such as medical records or personal emails, there is a risk that those models not only learn the hoped-for patterns but also learn and expose sensitive information about their training data. Several different types of inference attacks on machine learning models have been found, and methods have been proposed to mitigate the risks of exposing sensitive aspects of training data. Differential privacy provides formal guarantees bounding certain types of inference risk, but, at least with state-of-the-art methods, providing substantive differential privacy guarantees requires adding so much noise to the training process for complex models that the resulting models are useless. Experimental evidence, however, suggests that inference attacks have limited power, and in many cases a very small amount of privacy noise seems to be enough to defuse them. In this talk, I will give an overview of a variety of inference risks for machine learning models, discuss strategies for evaluating model inference risks, report on experiments by our research group to better understand the power of inference attacks in more realistic settings, and explore some broader connections between privacy, fairness, and adversarial robustness.
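The tension the abstract describes, between privacy noise and model utility, is usually realized in differentially private training along the lines of DP-SGD: each example's gradient is clipped to bound its influence, then Gaussian noise scaled to that bound is added. The sketch below is illustrative only, not code from the talk; the function name and parameter values are assumptions.

```python
import numpy as np

def dp_noisy_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a per-example gradient and add Gaussian noise (DP-SGD style).

    clip_norm bounds any single example's influence on the update;
    noise_multiplier controls the privacy/utility trade-off the talk
    discusses: larger values give stronger guarantees but noisier models.
    All names here are illustrative, not from the speaker's work.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)  # scale down only if norm > clip_norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Usage: a gradient of norm 5 is clipped to norm 1 before noise is added.
noisy = dp_noisy_gradient(np.array([3.0, 4.0]))
```

For complex models the gradient dimension is large, so the total injected noise grows accordingly, which is why substantive guarantees can destroy utility while a small noise_multiplier may still blunt practical attacks.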
David Evans is a Professor of Computer Science at the University of Virginia where he leads a research group focusing on security and privacy (https://uvasrg.github.io/). He won the Outstanding Faculty Award from the State Council of Higher Education for Virginia, and was Program Co-Chair for the 24th ACM Conference on Computer and Communications Security (CCS 2017) and the 30th (2009) and 31st (2010) IEEE Symposia on Security and Privacy, where he initiated the Systematization of Knowledge (SoK) papers. He is the author of an open computer science textbook (https://computingbook.org/) and a children's book on combinatorics and computability (https://dori-mic.org/), and co-author of a book on secure multi-party computation (https://securecomputation.org/). He has SB, SM and PhD degrees from MIT and has been a faculty member at the University of Virginia since 1999.
On one side, the security industry has successfully adopted some AI-based techniques. Uses range from mitigating denial-of-service attacks to forensics, intrusion detection systems, homeland security, critical infrastructure protection, detecting sensitive information leakage, access control, and malware detection. On the other side, we see the rise of Adversarial AI, whose core idea is to subvert AI systems for fun and profit. The methods used to produce AI systems are systematically susceptible to a new class of vulnerabilities, which adversaries exploit to alter AI system behavior to serve a malicious end goal. This panel discusses some of these aspects.
Security measurement helps identify deployment gaps and presents extremely valuable research opportunities. However, such research is often dismissed by academia as lacking novelty. I will first share my research journey designing and producing CryptoGuard, a high-precision tool for scanning cryptographic vulnerabilities in large Java projects. That work led us to publish two benchmarks used for systematically assessing state-of-the-art academic and commercial solutions, and helped Oracle Labs integrate our detection into their routine scanning. Other specific measurement and deployment cases to discuss include the Payment Card Industry Data Security Standard, which was involved in high-profile data breach incidents, and fine-grained Address Space Layout Randomization. The talk will also point out the need for measurement in AI development in the context of code repair. Broadening research styles by accepting and encouraging deployment-related work will help our field progress toward maturity.
Dr. Danfeng (Daphne) Yao is a Professor of Computer Science at Virginia Tech. She is an Elizabeth and James E. Turner Jr. ’56 Faculty Fellow and CACI Faculty Fellow. Her research interests are in building deployable and proactive cyber defenses, focusing on detection accuracy and scalability. She creates new models, algorithms, techniques, and deployment-quality tools for securing large-scale software and systems. Her tool CryptoGuard helps large software companies and Apache projects harden their cryptographic code. She systematized program anomaly detection in the book Anomaly Detection as a Service. Her patents on anomaly detection are extremely influential in the industry, having been cited by over 200 other patents from major cybersecurity firms and technology companies, including FireEye, Symantec, Qualcomm, Cisco, IBM, SAP, Boeing, and Palo Alto Networks. Her IEEE TIFS papers on enterprise data loss prevention have been viewed 30,000 times. Dr. Yao is a recipient of the NSF CAREER Award and the ARO Young Investigator Award. She is the ACM SIGSAC Treasurer/Secretary and has been a member of the ACM SIGSAC executive committee since 2017. She spearheads multiple inclusive excellence initiatives, including the NSF-sponsored Individualized Cybersecurity Research Mentoring (iMentor) Workshop and the Women in Cybersecurity Research (CyberW) Workshop. Daphne received her Ph.D. from Brown University, M.S. degrees from Princeton University and Indiana University, Bloomington, and a B.S. degree from Peking University in China.
The field of cybersecurity is highly dynamic and needs continuous evolution. This requires not only formal and informal education, but also a security mindset to be developed in our future workforce. This panel elaborates on some such aspects.
We are living in a complex world that is rapidly evolving due to technology. The WWW and social media have eliminated boundaries and social norms, and with COVID-19 the work environment has drastically changed. While there are numerous career opportunities in Computer Science in general, and Cyber Security and Artificial Intelligence/Data Science in particular, the competition is also extremely intense around the globe. It is almost impossible for a person to succeed in his/her career without the advice and mentorship of senior researchers, developers, and technologists. Almost every person I have known who has succeeded has had a mentor (in many cases, mentors) who guided and supported him/her during the early stages of his/her career. Therefore, every career professional must have a mentor, regardless of gender, race/ethnicity, and age.
Lack of mentorship is perhaps the most important reason why women and minority communities have not done as well in their careers, especially in lucrative fields like Cyber Security; another could be bias. Lack of opportunity starts at an early age, as boys are given preference over girls in almost every culture, and as time progresses girls are left behind in schools, colleges, and the workforce. As a result, women often end up working mainly to supplement their husbands’ incomes. Minority communities are also at a tremendous disadvantage, as their parents are often not as educated as those from non-minority communities, so minority boys and girls start with a huge handicap. Even when women and members of minority communities are fortunate enough to get an education and a good job, very few from these communities hold higher positions, and so the junior researchers, developers, and technologists are often ignored and left to fend for themselves. They see their non-minority colleagues thrive, possibly due to the extensive mentoring they receive, and get frustrated, which traps them in a vicious cycle.
What is the solution to this huge problem? The first step is to realize that there is a problem; people, especially those in non-minority communities, often do not realize there is one. Thanks to the #MeToo and Black Lives Matter (BLM) movements, people are becoming more educated about the problem. As a result, there is much more awareness about Diversity, Equity, and Inclusion (DEI); it is not about giving a job to a person because she is a woman, it is about building a safe and inclusive work environment where everyone can thrive. We must not only focus on the advancement of women, which is a must, but also include every underrepresented community, including African Americans, Latino Americans, Native Americans, LGBTQ Americans, people with disabilities, autistic individuals, and the elderly. We have to go beyond our own gender and race/ethnicity and help everyone succeed. Every organization must have policies for DEI. Good mentoring enables a person to understand the culture of the organization and what it takes to succeed; that is, mentoring is essential to support DEI. We need domain-specific mentors (e.g., in Cyber Security or Data Science) rather than generalists (e.g., psychologists); only those working in your field really understand what you need to do to advance in your education and career (e.g., top journal vs. top conference publications for tenure).
This presentation will start with a discussion of DEI and then discuss the importance of mentoring to support DEI in fields like cyber security and data science. I will share my personal story of how a lack of mentoring was initially tough on my career, and how I then chose mentors who supported me and helped me thrive in my career in cyber security and data science. I will also give my top ten reasons why a career in cyber security / data science will benefit women and underrepresented minority communities.
Dr. Bhavani Thuraisingham is the Founders Chair Professor of Computer Science and the Executive Director of the Cyber Security Research and Education Institute at the University of Texas at Dallas (UTD). She is also a visiting Senior Research Fellow at Kings College, University of London and an elected Fellow of the ACM, IEEE, the AAAS, the NAI and the BCS. She was a Cyber Security Policy Fellow at the New America Foundation for 2017-2018, focusing on engaging rural America in cyber security. For the past 35 years, her research interests have been in integrating cyber security and artificial intelligence/data science (earlier, computer security and data management/mining), including as they relate to public policy. She has received several technical and leadership awards, including the IEEE CS 1997 Technical Achievement Award, ACM SIGSAC 2010 Outstanding Contributions Award, the IEEE Comsoc Communications and Information Security 2019 Technical Recognition Award, the IEEE CS Services Computing 2017 Research Innovation Award, the ACM CODASPY 2017 Lasting Research Award, the IEEE ISI 2010 Research Leadership Award, and the ACM SACMAT 10 Year Test of Time Awards for 2018 and 2019 (for papers published in 2008 and 2009).
She has worked tirelessly to support women and minority groups in Cyber Security and Data Science. Of the 23 PhD students she will have graduated between 2008 and 2022, at least 50% are women, and they also include members of the African American, Latino American, and LGBTQ communities. She co-chaired the Women in Cyber Security Conference (WiCyS) in 2016, delivered the featured address at the 2018 Women in Data Science (WiDS) conference at Stanford University, and serves as Co-Director of both the Women in Cyber Security and Women in Data Science Centers at UTD. She has spent around 20 years promoting Diversity, Equity and Inclusion in Cyber Security and Data Science, has chaired multiple panels, including her recent panel at IEEE ISI 2020 (Intelligence and Security Informatics), and has given multiple keynote/featured addresses at Cyber-W, iMentor, SWE, WITI, Girls Who Code, and WICE (Women in Communications Engineering) celebrating International Women’s Day. She gives talks on cyber security at DFW public libraries and is an official mentor to junior faculty as well as high school students in DFW. She received the Women in Technology Award from the Dallas Business Journal in 2017 and the Woman of Color Leadership Award from Career Communications Inc. in 2001. She was named one of the 500 most influential business leaders in North Texas for 2021 by D Magazine’s D CEO. She is the recipient of IEEE Cyber Security and Cloud’s 2021 Special Recognition Award for her tireless work in promoting Diversity, Equity and Inclusion among women and underrepresented minority communities.
Her 40-year career spans industry (Honeywell), a federal research laboratory (MITRE), the US government (NSF), and US academia. Her work has resulted in 130+ journal articles, 300+ conference papers, 180+ keynote and featured addresses, seven US patents, fifteen books, and podcasts, as well as technology transfer of her research to commercial products and operational systems. She received her PhD from the University of Wales, Swansea, UK, and the prestigious earned higher doctorate (D.Eng) from the University of Bristol, UK. She also has a Certificate in Public Policy Analysis from the London School of Economics (May 2021).