
Defending against Model Stealing via Verifying Embedded External Features

February 1, 2023

Authors

Yiming Li

Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China


Linghui Zhu

Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China; Research Center of Artificial Intelligence, Peng Cheng Laboratory, Shenzhen, China


Xiaojun Jia

Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China


Yong Jiang

Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China; Research Center of Artificial Intelligence, Peng Cheng Laboratory, Shenzhen, China


Shu-Tao Xia

Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China; Research Center of Artificial Intelligence, Peng Cheng Laboratory, Shenzhen, China


Xiaochun Cao

Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China


Volume:

Proceedings of the AAAI Conference on Artificial Intelligence, 36

Issue:

No. 2: AAAI-22 Technical Tracks 2

Track:

AAAI Technical Track on Computer Vision II


Abstract:

Obtaining a well-trained model involves expensive data collection and training procedures; the model is therefore valuable intellectual property. Recent studies have revealed that adversaries can 'steal' deployed models even when they have no training samples and cannot access the model's parameters or structure. Some defense methods have been proposed to alleviate this threat, mostly by increasing the cost of model stealing. In this paper, we explore defense from another angle: verifying whether a suspicious model contains knowledge of defender-specified external features. Specifically, we embed the external features by tampering with a few training samples via style transfer. We then train a meta-classifier to determine whether a model was stolen from the victim. This approach is inspired by the understanding that stolen models should contain knowledge of the features learned by the victim model. We examine our method on both the CIFAR-10 and ImageNet datasets. Experimental results demonstrate that our method is effective in detecting different types of model stealing simultaneously, even if the stolen model is obtained via a multi-stage stealing process. The code for reproducing the main results is available on GitHub (https://github.com/zlh-thu/StealingVerification).
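To make the pipeline in the abstract concrete, here is a minimal PyTorch sketch of the three steps it describes: tampering with a small fraction of training images to embed an external style feature, extracting a gradient-based signature from a model on style-transformed probes, and scoring that signature with a meta-classifier. The `apply_style` blend, the `gradient_signature` construction, and all shapes and names below are simplifying assumptions for illustration, not the authors' exact implementation; see the linked repository for the real one.

```python
import torch
import torch.nn as nn

def apply_style(images, style, alpha=0.3):
    # Stand-in for a real style-transfer network: blend each image with a
    # fixed "style" pattern. Purely illustrative; the paper uses actual
    # style transfer to embed the external features.
    return (1 - alpha) * images + alpha * style

def embed_external_features(images, style, poison_fraction=0.1):
    # Tamper with a small fraction of training images, keeping labels intact,
    # so the victim model learns the external style feature alongside its task.
    images = images.clone()
    n = max(1, int(len(images) * poison_fraction))
    idx = torch.randperm(len(images))[:n]
    images[idx] = apply_style(images[idx], style)
    return images

def gradient_signature(model, x, y):
    # Flattened loss gradient on style-transformed probe samples. Models stolen
    # from the victim are expected to react to the embedded features; benign,
    # independently trained models are not.
    loss = nn.functional.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.flatten() for g in grads]).detach()

# Toy shapes for illustration: CIFAR-10-like 3x32x32 images, 10 classes.
style = torch.rand(3, 32, 32)

# Step 1: build the watermarked training set for the victim model.
train_images = embed_external_features(torch.rand(100, 3, 32, 32), style)

# Step 2: extract a gradient signature from a (toy) suspicious model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
probes = apply_style(torch.rand(8, 3, 32, 32), style)
labels = torch.randint(0, 10, (8,))
sig = gradient_signature(model, probes, labels)

# Step 3: meta-classifier over gradient signatures, trained with signatures
# from the victim model (positive class) and from benign models (negative
# class), then used to score the suspicious model's signature.
meta_clf = nn.Sequential(nn.Linear(sig.numel(), 128), nn.ReLU(), nn.Linear(128, 2))
score = meta_clf(sig)
```

In this reading of the abstract, verification needs only probe queries and gradients of the suspicious model, so no training data or knowledge of its architecture beyond parameter access is assumed by the sketch itself.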

DOI:

10.1609/aaai.v36i2.20036

