Use Case: Conduit for Healthcare Applications
Data Sharing with the AI
Conduit does not transmit actual data to AI models. With traditional LLM systems such as ChatGPT, you would send a CSV file containing the data along with your question. With Conduit, the process is different and involves three key steps:
Sending Metadata Description: Conduit sends a description of the table's metadata, which includes the structure and types of data in the columns.
Formulating the Question: The question itself is sent to the AI.
Requesting a Python Script: A request is made for a Python script that will generate the answer.
In this way, Conduit only shares metadata, not the actual data, with the large language model (LLM).
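The three steps above can be sketched as follows. This is a hypothetical illustration, not Conduit's actual implementation: the `describe_table` helper and the prompt fields are invented for this example. The point it demonstrates is that only column names, inferred types, and a row count leave the system, never the row values themselves.

```python
import csv
import io

def describe_table(csv_text: str) -> dict:
    """Build a metadata-only description of a table: column names and
    crudely inferred types, but none of the row values."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = list(reader)
    schema = {}
    for col in reader.fieldnames:
        values = [r[col] for r in rows]
        # Treat a column as numeric only if every value parses as a number.
        if all(v.lstrip("-").replace(".", "", 1).isdigit() for v in values):
            schema[col] = "numeric"
        else:
            schema[col] = "text"
    return {"columns": schema, "row_count": len(rows)}

# A toy patient table that stays local; only its metadata is shared.
patients = "patient_id,age,diagnosis\n101,34,flu\n102,58,asthma\n"

# The payload sent to the LLM: metadata, the question, and a request
# for a Python script that will compute the answer locally.
prompt = {
    "metadata": describe_table(patients),
    "question": "What is the average patient age?",
    "instruction": "Return a Python script that computes the answer.",
}
```

Note that no patient identifier or diagnosis appears anywhere in `prompt`: the LLM sees enough structure to write correct code, and nothing more.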
Controlled Access to Internal Data
Moreover, access to internal data is stringently controlled within Conduit. Here’s how it ensures data security:
Authentication and Authorization: Conduit uses robust authentication and authorization mechanisms to safeguard your internal data. It integrates with your data systems through SQL and APIs, ensuring that different users have different access levels.
User-Specific Data Retrieval: When a user asks a question, the LLM generates a program to answer it. Conduit then provides the user's identity to your internal data API, which returns only the data that the user is authorized to see.
Executing the Program: Conduit runs the generated program on the subset of data that the user is permitted to access, ensuring that the response is based on authorized data only.
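A minimal sketch of the retrieval-and-execution flow described above. All names here (the `RECORDS` table, the clinic-based authorization map, `fetch_authorized`, `run_generated_program`) are assumptions made for illustration; the key property shown is that the generated program only ever runs over the rows the internal data API returns for that user.

```python
# Toy internal data store.
RECORDS = [
    {"patient_id": 101, "clinic": "north", "age": 34},
    {"patient_id": 102, "clinic": "south", "age": 58},
    {"patient_id": 103, "clinic": "north", "age": 47},
]

# Assumed authorization model: each user may see one clinic's data.
USER_CLINICS = {"dr_lee": "north", "dr_patel": "south"}

def fetch_authorized(user: str) -> list[dict]:
    """Stand-in for the internal data API: given the caller's identity,
    return only the rows that user is authorized to see."""
    clinic = USER_CLINICS[user]
    return [r for r in RECORDS if r["clinic"] == clinic]

def run_generated_program(user: str) -> float:
    """Execute the LLM-generated analysis (here, a simple average)
    over the authorized subset only."""
    rows = fetch_authorized(user)
    return sum(r["age"] for r in rows) / len(rows)
```

Because filtering happens inside the data API rather than in the generated code, two users asking the same question get answers computed from different, correctly scoped subsets: `run_generated_program("dr_lee")` averages only the north-clinic patients, while `run_generated_program("dr_patel")` sees only the south clinic.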
Mitigating Risks of Sharing Sensitive Information
The system's design minimizes the risks that arise when clients or patients share sensitive information. While the details depend on the specific use case, the combined effect of metadata-only sharing and controlled data access significantly reduces the potential for unauthorized access to, or exposure of, sensitive information. By sharing only metadata with the LLM and tightly controlling access to the actual data, Conduit provides a secure and compliant environment for data interaction.