These AI Use Principles apply to Suprema’s access control devices.
Committed to Secure and Fair Access.
Our access control devices utilize AI-based algorithms to enhance the accuracy and processing speed of fingerprint and facial authentication.
These AI technologies are used solely to identify authorized users accurately and efficiently and to apply those identification results for authentication and access authorization. They are not used for surveillance, behavioral analysis, risk assessment, or the evaluation or inference of individuals’ social characteristics.
AI Governance and Accountability Principles
We adopt fairness, transparency, accountability, and robust data security as core principles throughout the design, development, and operation of our access control devices and related AI functions.
AI is used as a tool to enhance the accuracy and processing speed of identifying individuals for access control purposes. Ultimate responsibility for device operation, access decisions, and system management always remains with humans and the operating entity.
We also recognize the risk-based regulatory framework of the EU Artificial Intelligence Act (EU AI Act) and, taking into account the characteristics and intended use of AI-based biometric technologies, we review relevant requirements and respond to them in a phased and appropriate manner.
In this process, we prioritize the implementation of applicable transparency and accountability measures, including human oversight, procedures for raising objections, and the provision of alternative authentication methods.
Where necessary, administrator intervention and alternative authentication options are used to protect users’ rights and safety.
All users have the right to access facilities in an environment that is operated in an ethical and responsible manner.
We publicly disclose these principles as outlined above and commit to complying with them on an ongoing basis.
1. Access Control Device Overview & Ethical Design
The purpose of our access control devices is to accurately identify authorized individuals and control access to facilities. AI algorithms are used solely to enhance the accuracy and processing speed of identifying individuals through fingerprint and facial recognition, and such identification results are used for authentication and access authorization. The access control devices do not identify or infer sensitive characteristics such as race, gender, or age.
AI used in access control devices is applied exclusively for basic identification purposes and is strictly designed not to perform surveillance, biometric categorisation, risk assessment, or any function that could negatively affect individual rights or public order.
We strictly adhere to core AI ethical principles, including fairness, transparency, and accountability, throughout the development and operation of our access control devices and related AI functions. These principles are reflected in our data processing and algorithm validation processes.
Access control devices are designed with consideration for privacy and human rights and comply with applicable laws and regulations, including GDPR, as well as internal security policies. Access control management and user consent procedures are applied to prevent misuse.
2. Bias, Fairness & Accountability
Facial recognition models are trained using data that includes diverse races, ages, and genders. Data distributions are reviewed and supplemented as necessary to prevent imbalance and ensure representativeness.
Recognition performance is periodically reviewed across different groups to assess potential bias. Where necessary, data improvements and retraining are conducted to maintain fair performance for all users.
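As an illustration of the kind of per-group review described above (not Suprema’s actual audit tooling), false-rejection rates for genuine users can be compared across demographic groups; the group labels and sample data below are hypothetical:

```python
from collections import defaultdict

def per_group_false_rejection_rate(results):
    """Compute the false-rejection rate per group.

    `results` is a list of (group, accepted) pairs for genuine
    (enrolled) users; a rejected genuine user counts as a false
    rejection. Field names are illustrative only.
    """
    totals = defaultdict(int)
    rejects = defaultdict(int)
    for group, accepted in results:
        totals[group] += 1
        if not accepted:
            rejects[group] += 1
    return {g: rejects[g] / totals[g] for g in totals}

# Hypothetical audit sample: (group label, authentication accepted?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", True), ("B", True), ("B", False)]
rates = per_group_false_rejection_rate(sample)
```

A material gap between groups in such a comparison would trigger the data improvement and retraining steps mentioned above.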
AI is used solely as a tool to enhance the accuracy and processing speed of identifying individuals for access control purposes, and final decisions and accountability always remain with humans, such as system administrators. All authentication results may be subject to human review, and roles and responsibilities are clearly defined.
AI operates independently of protected characteristics such as race, gender, or age and is designed not to identify, infer, or categorise individuals based on such protected attributes.
AI is a tool, and ultimate responsibility for access control device operation and management lies with the operating entity. Human oversight procedures are in place to ensure prompt response in the event of errors.
The access control devices are used solely for access control and authentication purposes and are not used for employment-related decision-making. Alternative authentication methods (e.g., cards, mobile authentication) are provided to support accessibility for persons with disabilities.
3. Transparency & Automated Processing
All authentication results and processing activities of access control devices are automatically logged and managed transparently. Administrators can review authentication records and decision bases through system logs, which may be verified through internal audit procedures when necessary.
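A minimal sketch of the kind of structured audit record described above, assuming a line-oriented JSON log; the field names are illustrative and do not reflect Suprema’s actual log schema:

```python
import json
import time

def log_authentication(log_file, user_id, method, matched, score):
    """Append one structured audit record per authentication attempt.

    Writing one JSON object per line keeps records machine-readable
    for later administrator review and internal audit.
    """
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "user_id": user_id,
        "method": method,    # e.g. "face", "fingerprint", "card"
        "matched": matched,  # the authentication decision
        "score": score,      # the similarity score behind the decision
    }
    log_file.write(json.dumps(record) + "\n")
```

Because each record carries the score alongside the decision, an administrator can later reconstruct the basis of any individual authentication result.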
Accuracy and reliability are continuously improved through internal reviews and expert feedback.
The operating entity must obtain user consent during registration. Users may choose alternative authentication methods if they do not wish to use AI-based authentication. Feedback on AI performance may be submitted through our customer support channels.
Authentication results are clearly displayed, allowing operators to easily verify matches against registered information. Detailed explanations can be provided through logs and records when necessary.
AI performs automated decisions only within a limited scope, such as access authentication. Final approval and policy settings are managed by the operating entity, and all results are subject to effective human oversight when required.
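One way to picture the division of labor described above: the model produces only a match score, while operator-configured policy makes the final call, escalating borderline scores to a human. The thresholds here are hypothetical and would be set by the operating entity, not the model:

```python
def access_decision(match_score, threshold=0.90, review_band=0.05):
    """Gate an AI match score through operator-set policy.

    Scores clearly above the threshold grant access, scores clearly
    below it deny access, and scores near the threshold are referred
    to an administrator rather than decided automatically.
    """
    if match_score >= threshold + review_band:
        return "grant"
    if match_score <= threshold - review_band:
        return "deny"
    return "refer_to_administrator"
```

Keeping the thresholds in operator-managed configuration, rather than inside the model, is what keeps final approval and policy settings with the operating entity.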
Installation and operation manuals provide guidance on user registration, authentication, and access control device management functions. Relevant documentation is available upon request.
4. Data Management & Security
AI model training primarily uses internally collected and maintained data, as well as lawfully available external data sources.
Only data collected and stored with valid consent in accordance with applicable laws and regulations are used, and retention periods and usage scopes are managed in line with internal standards.
Data accuracy and integrity are ensured through data cleansing and validation processes. During transmission and storage, encryption, checksum verification, and hash value comparisons are used to maintain integrity.
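The hash-comparison step mentioned above can be sketched as follows, using SHA-256 from Python’s standard library; the stored digest travels or is stored alongside the data and is compared against a freshly computed one:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected_digest: str) -> bool:
    """Recompute the digest and compare it to the stored value."""
    return sha256_digest(data) == expected_digest

# Illustrative payload; a real system would hash the stored record.
payload = b"example template bytes"
stored = sha256_digest(payload)  # digest recorded at storage time
```

Any single-bit change to the payload produces a different digest, so the comparison detects corruption or tampering introduced during transmission or storage.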
Data is collected from diverse sources and conditions and is regularly reviewed and supplemented to minimize bias and ensure representativeness. Expert reviews and statistical analyses are used to select relevant attributes.
Data is securely deleted in accordance with applicable policies once retention periods expire, and systems are managed to ensure that no unnecessary data remains.
5. Risk Management
Processes are in place to provide administrators of access control devices with manuals and response measures for risks relating to data protection, device performance, or errors in AI functions.
Access control devices use energy-efficient hardware and algorithms, and continuous optimization efforts are made to reduce unnecessary resource consumption and minimize environmental impact.
6. AI Literacy & User Capability Enhancement
AI literacy refers to the ability to understand the basic principles and operation of AI, use it effectively, and critically evaluate AI-generated outcomes. It goes beyond technical usage skills to include understanding AI’s capabilities and limitations and making ethical judgments.

We strive to ensure that all stakeholders involved in the design, development, operation, management, and use of access control devices and related AI functions can acquire AI literacy, to the extent appropriate to their respective roles and responsibilities, through education, guidance, or other appropriate means. We view AI literacy as a practical capability rather than a declarative concept, with a focus on recognizing and responding to AI malfunctions, bias, privacy risks, and potential legal liabilities.

AI literacy initiatives are conducted in an appropriate scope and manner, taking into account the roles and responsibilities of stakeholders, including employees, system operators (administrators), and customer administrators. Typical content may include, but is not limited to:
Employees: Basic AI principles, potential bias and errors, accountability structures
Operators (Administrators): Handling authentication errors, alternative authentication procedures, log interpretation
Customer Administrators: Limitations of AI use, legal and ethical responsibilities, prevention of misuse
January 22, 2026