Endor Labs Empowers Organizations to Discover and Govern Open Source Artificial Intelligence Models Used in Applications

Endor Labs AI Model Discovery enables organizations to discover pre-trained AI models being used in applications, then evaluate risks and enforce policies for use

CONTOS DUNNE COMMUNICATIONS
endorlabs@cdc.agency
+1 (408) 776 1400 / +1 (408) 893 8750

Endor Labs, the leader in open source software security, today announced a new feature, AI Model Discovery, which enables organizations to discover the AI models already in use across their applications, and to set and enforce security policies governing which models are permitted.

“There’s currently a significant gap in the ability to use AI models safely—the traditional Software Composition Analysis (SCA) tools deployed in many enterprises are designed mainly to track open source packages, which means they usually can’t identify risks from local AI models integrated into an application,” said Varun Badhwar, co-founder and CEO of Endor Labs. “Meanwhile, product and engineering teams are increasingly turning to open source AI models to deliver new capabilities for customers. That’s why we’re excited to launch Endor Labs AI Model Discovery, which brings unprecedented security in open source AI deployment.”

AI Model Discovery provides the following capabilities:

  1. Discover – scan for and find local AI models already used within Python applications, build a complete inventory of them, and track which teams and applications use them. Currently, Endor Labs can identify all AI models sourced from Hugging Face.
  2. Evaluate – analyze AI models based on known risk factors using Endor Scores for security, quality, activity, and popularity, and identify models with questionable sources, practices, or licenses.
  3. Enforce – set guardrails for the use of open source AI models across the organization. Warn developers about policy violations, and block high-risk models from being used within applications.

“While vendors have rushed to incorporate AI into their security tooling, they've largely overlooked a critical need: Securing AI components used in applications,” said Katie Norton, Research Manager, DevSecOps and Software Supply Chain Security at IDC. “IDC research finds that 60% of organizations are choosing open source models over commercial ones for their most important GenAI initiatives, so finding and securing these components is critical for any dependency management program. Vendors like Endor Labs are addressing an urgent need by integrating AI component security directly into software composition analysis (SCA) workflows, while providing meaningful remediation capabilities that don't overwhelm developers.”

#AI Model Discovery is the newest feature from @EndorLabs enabling #AppSec professionals to discover pre-trained #opensource AI models being used in their applications, then evaluate risks and enforce policies over their use

