AI in the Australian Security Industry
Is the AI genie out of the bottle? As the security industry explores potential applications for Artificial Intelligence, the question remains: how will the industry ensure it doesn’t lose control of this new intelligence, whose potential for good or ill is not yet known?
Artificial Intelligence (AI) is transforming many sectors of the economy and society, including the private security industry. AI offers benefits such as enhanced security operations, improved threat detection and prevention, and reduced costs and risks. However, it also carries ethical, legal and social implications, along with the potential for loss of control and accountability. It is therefore important to ensure that AI is used in a safe, responsible and ethical manner, and that adequate regulations and standards govern its development and deployment.
The Australian Security Industry Association Ltd (ASIAL) plays a key role in driving Australian standards, developing codes of conduct and raising the level of professionalism within the security industry. ASIAL also recognises the opportunities and challenges of AI for the security industry and has been advocating for the development of a national AI strategy and framework that would address the ethical, legal and social aspects of AI.
ASIAL has also been involved in various initiatives and projects related to AI and security, such as:
- Participating in the development of the Australian Government’s AI Ethics Principles, which are designed to ensure AI is safe, secure and reliable.
- Collaborating with the Australian Institute of Criminology (AIC) and the Australian Federal Police (AFP) to conduct research on the use and impact of AI and biometrics in the security industry.
- Partnering with the Australian Computer Society (ACS) and the Australian Information Industry Association (AIIA) to host events and webinars on AI and security, such as the AI and Security Summit 2021.
- Providing guidance and resources to its members on how to adopt and implement AI in their security operations, such as the ASIAL Go App, which provides access to dedicated news feeds, member resources, events, publications, polls and more.
Australia is not alone in exploring the use and regulation of AI in the security industry. The European Union, Canada and the United States have been developing and implementing policies and strategies to address the opportunities and challenges of AI, especially in high-risk sectors like security.
For example, the EU has proposed a risk-based approach to regulating AI that would classify AI systems into four categories: unacceptable, high-risk, limited-risk and minimal-risk. Systems posing unacceptable risk, such as those used for social scoring or mass surveillance, would be banned outright. High-risk systems, such as those used in critical infrastructure, law enforcement or biometric identification, would be subject to strict requirements. Limited-risk systems, such as chatbots and voice assistants, would have to comply with transparency obligations. Minimal-risk systems, such as video games and spam filters, would face no specific requirements.
Similarly, Canada has developed a Directive on Automated Decision-Making, which applies to all federal departments and agencies that use AI systems to make administrative decisions affecting individuals or businesses. The directive requires that AI systems be assessed for their impact on human rights, privacy, security and accountability, and assigned a level of risk from low to very high. Depending on that level, systems must meet different standards and safeguards, such as human intervention, explanation, accuracy and auditability.
In the United States, there is no comprehensive federal legislation on AI, but various bills and initiatives at the state and local levels aim to regulate or restrict the use of AI in certain domains, such as facial recognition, criminal justice and employment. For instance, jurisdictions including California, Massachusetts and Portland have banned or limited the use of facial recognition technology by law enforcement or public agencies, citing concerns over privacy, civil rights and accuracy.
As AI becomes more prevalent and powerful in the security industry, it is essential that Australia keeps pace with global developments and trends, ensuring that its AI policies and practices are consistent and compatible with those of its allies and partners. By doing so, Australia can harness the benefits of AI for its security and prosperity while mitigating its risks and harms to society and values.