With advances in computing hardware and algorithms, and more importantly the growing availability of large volumes of data, machine learning technologies have become increasingly popular. Practical systems have been deployed in various domains, such as face recognition, automatic video monitoring, and even assisted driving. However, the security implications of machine learning algorithms and systems are still unclear. For example, developers still lack a deep understanding of adversarial machine learning, one of the unique vulnerabilities of machine learning systems, and are unable to evaluate the robustness of machine learning algorithms effectively. Another prominent problem is the privacy concerns that arise when applying machine learning algorithms; as the general public becomes more concerned about personal privacy, more work is needed towards privacy-preserving machine learning systems.
Motivated by this situation, this workshop solicits original contributions on the security and privacy problems of machine learning algorithms and systems, including adversarial learning, algorithm robustness analysis, privacy-preserving machine learning, etc. We hope this workshop can bring researchers together to exchange ideas on cutting-edge technologies and brainstorm solutions for urgent problems arising from practical applications.
Topics of interest include, but are not limited to, the following:
Authors are welcome to submit their papers in one of the following two forms:
Full papers that present relatively mature research results related to security issues of machine learning algorithms, systems, and applications. The paper could present an attack, a defense, a security analysis, a survey, etc. Submissions of this type must follow the original LNCS format (see LNCS format) with a page limit of 18 pages (including references) for the main part (reviewers are not required to read beyond this limit) and 25 pages in total.
Short papers that describe ongoing work and bring new insights and inspiring ideas related to security issues of machine learning algorithms, systems, and applications. Short papers follow the same LNCS format as full papers (see LNCS format), but with a page limit of 9 pages (including references).
Submissions must be anonymous, with no author names, affiliations, acknowledgements, or obvious self-references. Once accepted, papers will appear in the formal proceedings. Authors of accepted papers must guarantee that their paper will be presented at the conference and must make their paper available online. There will be a best paper award.
Authors should consult Springer’s authors’ guidelines and use their proceedings templates, either for LaTeX or for Word, for the preparation of their papers. Springer encourages authors to include their ORCIDs in their papers. In addition, the corresponding author of each paper, acting on behalf of all of the authors of that paper, must complete and sign a Consent-to-Publish form, through which the copyright for their paper is transferred to Springer. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made.
The EasyChair system will be used for paper submission:
https://easychair.org/my/conference?conf=simla2021
Sudipta Chattopadhyay | Singapore University of Technology and Design | Workshop Chair
Sakshi Udeshi | Singapore University of Technology and Design | Web Chair
Chris Poskitt | Singapore Management University
Shuhao Zhang | Singapore University of Technology and Design
Wenrui Diao | Shandong University
Jingyi Wang | Zhejiang University
Ezekiel Soremekun | SnT, University of Luxembourg
Shuang Liu | Tianjin University
Kehuan Zhang | The Chinese University of Hong Kong
Time Table

Japan (UTC+9) | Singapore (UTC+8) | CEST (UTC+2) | UTC | Agenda | Chair | Details
15:20 | 14:20 | 8:20 | 6:20 | Opening | Sudipta |
15:30 | 14:30 | 8:30 | 6:30 | Invited Talk | Sudipta | Speaker: Dr. Mike Papadakis (SnT, University of Luxembourg). Title: Adversarial Attacks in ML-Enabled Systems
16:30 | 15:30 | 9:30 | 7:30 | Break | - |
16:45 | 15:45 | 9:45 | 7:45 | Paper Presentations (30 min each) | TBD | (1) Towards Demystifying Adversarial Robustness of Binarized Neural Networks; (2) Kryptonite: An Adversarial Attack using Regional Focus
18:00 | 17:00 | 11:00 | 9:00 | Closing | Sudipta |