Publication: Technical vulnerabilities of developing large ML models

Position: Author

Written for the Responsible AI Institute

Excerpt:


The explosive popularity of large language models (LLMs) has attracted commercial attention to the possible benefits of developing and integrating ML systems, especially those built on large models. Building larger and larger models has become the dominant approach to increasing algorithmic capability. However, the breakneck speed of large model development may have outpaced its corresponding digital security measures.


While large ML models are often associated with generative AI, their range of utility means their risks apply more widely than is commonly perceived. Large ML models are also prized for their pattern recognition capabilities, which can amplify the value of big data: companies can now functionalize their large datasets to generate value by training large ML models on them. Crucially, because adopting new techniques may not be accompanied by much fanfare, management may be unaware of this paradigm shift, leading to a mismatch in digital security policies. As large ML models become the new data processing, many medium to large companies may therefore unknowingly become susceptible to new categories of technical risk.


The high technical requirements for training, hosting, and running large models limit the applicable digital infrastructure. In conjunction with the fierce competition for compute procurement, the trend toward centralization of digital infrastructure may be unavoidable. Worryingly, this centralization defeats the traditional digital security paradigm of enclosed enterprise infrastructure. Not only does centralization elevate conventional risks such as reduced redundancy, remote access exposure, and breaches, but it also introduces novel risks native to AI development. While many of these digital security risks have established mitigations, the wide applicability of large ML models means some companies may be encountering both the conventional and the novel risks for the first time, potentially catching them unprepared.
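For companies newly exposed to these risks, even a simple automated check can surface a common failure mode. The Python sketch below probes a model-serving endpoint and flags it if anonymous requests succeed; the endpoint URL and the pass/fail policy are hypothetical assumptions for illustration, not a recommendation for any particular stack.

```python
# Minimal sketch: verify that a model-serving endpoint rejects requests
# that carry no credentials. The URL is a hypothetical placeholder.
import urllib.request
import urllib.error

ENDPOINT = "https://ml-serving.internal.example.com/v1/predict"  # hypothetical

def endpoint_rejects_anonymous(url: str) -> bool:
    """Return True if the endpoint refuses a request with no credentials."""
    req = urllib.request.Request(url, data=b"{}", method="POST")
    try:
        with urllib.request.urlopen(req, timeout=5):
            return False  # request succeeded without credentials: a red flag
    except urllib.error.HTTPError as err:
        # 401/403 means the server demanded authentication, as it should
        return err.code in (401, 403)

if __name__ == "__main__":
    if endpoint_rejects_anonymous(ENDPOINT):
        print("OK: endpoint requires authentication")
    else:
        print("WARNING: endpoint accepted an unauthenticated request")
```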


Many risks are associated with building AI models, and a great deal of attention is paid to the novel impact risks that arise over an AI system's life cycle. In addition to new AI legislation, standards organizations such as NIST and ISO are publishing guidelines for managing such risks. Consequently, traditional risks such as digital security and other technical risks may receive insufficient attention. This blog addresses several key technical risks that arise over the life cycle of large ML models and offers suggested actions for each. Fortunately, many of these risks share a fundamental structure with conventional digital security risks; framing them through conventional digital security analogues may therefore help establish mitigation measures.



---- Main content redacted due to contractual obligation ----



Three main takeaways from the piece:


With lawmakers, think tanks, and private interests scrutinizing AI's potential impact, there has been a corresponding shift in attention toward controlling AI's impact. However, this focus on novel impact risks may unintentionally divert resources from controlling security and technical risks. As such, all companies should ensure that, at a minimum, pre-existing digital security and technical risk controls are maintained.


Because AI is an amalgamation of data, processing, and application, its unstable categorization has led to disjointed security policies and gaps in handling procedures. Company security policies should align with the highest requirements of the three and, at a minimum, should frame AI as an extension of big data and assign the corresponding measures, as illustrated in the sketch below.
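As a toy illustration of this "highest requirement wins" rule, the Python sketch below classifies an AI system at the strictest security tier among its data, processing, and application components; the tier names and ordering are invented for the example.

```python
# Minimal sketch: an AI system inherits the strictest security tier among
# its data, processing, and application components. Tiers are illustrative.
TIERS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def required_tier(data: str, processing: str, application: str) -> str:
    """Return the strictest tier among the three components of an AI system."""
    return max((data, processing, application), key=TIERS.__getitem__)

# Confidential training data forces the whole system to "confidential",
# even if the serving application alone would be merely "public".
print(required_tier(data="confidential", processing="internal", application="public"))
# -> confidential
```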


While AI development opens many new attack vectors, the root causes of problematic situations remain mostly unchanged, ranging from human error to software misconfiguration to hardware vulnerabilities. As such, existing security policies should already mitigate or prevent problems to some degree. Companies should consult existing policies where possible and stay informed of emerging vulnerabilities, amending existing policies to encompass new risks or creating new ones to address new problems where needed.
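One concrete instance of an old root cause resurfacing in an ML pipeline is unsafe deserialization: pickle-based model checkpoints can execute arbitrary code when loaded. The Python sketch below uses the standard library's pickletools to flag pickle streams that reference importable callables, a common lightweight heuristic; the file path is hypothetical, and this is a sketch of the idea rather than a complete scanner.

```python
# Minimal sketch: flag serialized model files that can run code on load.
# Pickle opcodes like GLOBAL and REDUCE import and invoke callables during
# deserialization, which is how malicious checkpoints execute payloads.
import pickletools

SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def pickle_invokes_callables(path: str) -> bool:
    """Return True if the pickle stream references importable callables."""
    with open(path, "rb") as fh:
        data = fh.read()
    return any(op.name in SUSPICIOUS_OPCODES
               for op, _, _ in pickletools.genops(data))

if __name__ == "__main__":
    path = "model_checkpoint.pkl"  # hypothetical artifact from an ML pipeline
    if pickle_invokes_callables(path):
        print(f"WARNING: {path} invokes callables on load; vet before loading")
    else:
        print(f"OK: {path} contains only plain data opcodes")
```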


