
Embedding Sources

Introduction

Embedding sources allows Certara Generative AI to use retrieval-augmented generation (RAG) for more accurate responses. When a generate request is sent to Layar, the vector database is queried to see if the source has already been embedded. Users also have the option to embed a document manually, which gives more control over how the source is embedded.

Pre-Reqs

Before an embedding request can be made, the API request must be authenticated. Make sure you have already followed the instructions for importing dependencies and authentication from the Getting Started Guide.

👍

Check Your Imported Modules

Make sure you have imported the requests and json modules before proceeding with this guide.
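
A minimal setup sketch for the examples in this guide; the token comes from the Getting Started authentication step, and the environment URL shown here is a placeholder assumption.

import requests
import json

token = 'YOUR_ACCESS_TOKEN'              # from the Getting Started authentication step
envurl = 'https://sandbox.certara.ai'    # placeholder: your Layar environment URL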

The following header can be used in your request.

header = {'Accept': 'application/json',
          'Content-Type': 'application/json',
          'Authorization': f"Bearer {token}",
          'X-Vyasa-Client': 'layar',
          'X-Vyasa-Data-Providers': 'sandbox.certara.ai',
          'X-Vyasa-Data-Fabric': 'YOUR_FABRIC_ID'
          }

Parameters

Let's go over the parameters the endpoint can accept.

{
  "forceRefresh": False,
  "savedListId": "string",
  "documentId": "string",
  "provider": "string",
  "splitter": {
    "paragraph": {
      "min_chunk_size": 100,
      "max_chunk_size": 500
    },
    "sentence": {
      "chunk_size": 500,
      "chunk_overlap": 100
    }
  },
  "overwrite_embeddings": False
}

savedListId

A string that denotes the set ID. Providing this will cause the endpoint to embed all sources in the set.

documentId

A string that denotes the document or table ID. Providing this will cause the endpoint to embed a single source.

provider

A string that denotes the provider that the sources live under, e.g. Master-Pubmed.Vyasa.com.

splitter

A dictionary with sub-values that determine how the source is chunked. You can chunk by paragraph or by sentence.

paragraph

A dictionary of two integers, min_chunk_size and max_chunk_size. This allows you to chunk by paragraph. If you do not provide these values, the embedding model will determine the best size for the chunks.

min_chunk_size

An integer that determines the minimum token length of the chunk.

max_chunk_size

An integer that determines the maximum token length of the chunk.

sentence

A dictionary of two integers, chunk_size and chunk_overlap. This allows you to chunk by sentence. If you do not provide these values, the embedding model will determine the best size and overlap for the chunks.

chunk_size

An integer that determines the token length of the chunk.

chunk_overlap

An integer that determines how many tokens overlap in adjacent chunks.

🚧

Paragraph or Sentence

You can only choose to split by paragraph OR sentence; you cannot use both.
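
For example, a splitter that chunks by sentence instead of paragraph might look like the following sketch; the values are illustrative only.

splitter = {
  "sentence": {
    "chunk_size": 500,      # target token length of each chunk
    "chunk_overlap": 100    # tokens shared between adjacent chunks
  }
}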

forceRefresh

A boolean value. If the embedding fails and this value is set to True, the embedding will be retried.

overwrite_embeddings

A boolean value. If the embedding already exists and this value is set to True, the existing embedding will be overwritten.

🚧

Params Required for Retries

In order to create a new embedding for sources that already have one, you need to provide both forceRefresh and overwrite_embeddings, as shown in the sketch below.
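
A sketch of the retry-related parameters; the values are illustrative, and both flags must be True to regenerate an embedding that already exists.

retry_params = {
  "forceRefresh": True,            # retry the embedding job
  "overwrite_embeddings": True     # replace embeddings that already exist
}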

Choosing an Endpoint

There are two endpoints that can be used for embedding: /layar/sourceDocument/{id}/createEmbeddings or /layar/savedList/{id}/createEmbeddings

The one you will use depends on whether you need to embed a single source or a set. For the sake of this guide, the example will embed a set.
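
For reference, the single-source URI could be built like this (a sketch, assuming envurl and a documentId variable are already defined); the set-based URI is constructed later in this guide.

# Embed a single source rather than a whole set
embedDocUri = f'{envurl}/layar/sourceDocument/{documentId}/createEmbeddings'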

Creating Your Request Parameters

Since we are embedding a set, we will be using /layar/savedList/{id}/createEmbeddings.

In order to properly use the above parameters, you need to use the params argument in requests.

params = {
  "savedList": setId,
  "splitter": {
    "paragraph": {
      "min_chunk_size": 100
    }
  }
}

Embedding Request

With our request params created, we can now post the request.

embedSetUri = f'{envurl}/layar/savedList/{setId}/createEmbeddings'

response = requests.post(embedSetUri,
                         headers = header,
                         params = params)
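
You can inspect the job record returned by the request; a quick sketch:

from pprint import pprint

pprint(response.json())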

Embedding Response

The following is a normal response.

{'attributes': {'gptMessage': 'Embeddings successfully generated for provided '
                              'sources.',
                'gptStatus': 'success'},
 'createdByUser': 25006,
 'dataFabricId': 'fabric_5VU7JTUG2U9KV8KPT2QDL6LSUM_9',
 'dateIndexed': '2024-07-25T21:41:49.978+0000',
 'datePublished': '2024-07-25T21:41:49.978+0000',
 'dateUpdated': '2024-07-25T21:41:52.610+0000',
 'id': 'AZDr19kbF_MgIWqZ-n1R',
 'ids': {'documentId': 'AZDrrGvkF_MgIWqZ-nws', 'provider': 'dldev01.vyasa.com'},
 'name': 'Create Vector Store Embeddings',
 'percentComplete': 100.0,
 'status': 'COMPLETE',
 'type': 'createEmbeddings'}

When you use forceRefresh and overwrite_embeddings it will re-queue the embedding job. You can tell that it's a new job by the 'id' value.

{'attributes': {},
 'createdByUser': 25006,
 'dataFabricId': 'fabric_5VU7JTUG2U9KV8KPT2QDL6LSUM_9',
 'id': 'AZDr4tYLF_MgIWqZ-n1h',
 'ids': {'documentId': 'AZDrrGvkF_MgIWqZ-nws', 'provider': 'dldev01.vyasa.com'},
 'name': 'Create Vector Store Embeddings',
 'status': 'RUNNING',
 'type': 'createEmbeddings'}

If you run the request with forceRefresh, you can re-run the request WITHOUT the parameters to see the status of the new job. If you didn't include overwrite_embeddings in the first request, you will see the message "Embeddings already exist. Set 'overwrite_embeddings' to True if you want to overwrite."
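
A sketch of that status check, re-posting to the same URI without the body parameters (assumes the earlier imports and variables are still in scope):

# Re-run the request without parameters to check the status of the job
status_response = requests.post(embedSetUri, headers = header)
pprint(status_response.json())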

{'attributes': {'gptMessage': 'Embeddings already exist. Set '
                              "'overwrite_embeddings' to True if you want to "
                              'overwrite.',
                'gptStatus': 'warning'},
 'createdByUser': 25006,
 'dataFabricId': 'fabric_5VU7JTUG2U9KV8KPT2QDL6LSUM_9',
 'dateIndexed': '2024-07-25T21:53:50.090+0000',
 'datePublished': '2024-07-25T21:53:50.090+0000',
 'dateUpdated': '2024-07-25T21:53:51.322+0000',
 'id': 'AZDr4tYLF_MgIWqZ-n1h',
 'ids': {'documentId': 'AZDrrGvkF_MgIWqZ-nws', 'provider': 'dldev01.vyasa.com'},
 'name': 'Create Vector Store Embeddings',
 'percentComplete': 100.0,
 'status': 'COMPLETE',
 'type': 'createEmbeddings'}

If you included both forceRefresh AND overwrite_embeddings, you will get the following.

{'attributes': {'gptMessage': 'Embeddings successfully generated for provided '
                              'sources.',
                'gptStatus': 'success'},
 'createdByUser': 25006,
 'dataFabricId': 'fabric_5VU7JTUG2U9KV8KPT2QDL6LSUM_9',
 'dateIndexed': '2024-07-25T21:53:50.090+0000',
 'datePublished': '2024-07-25T21:53:50.090+0000',
 'dateUpdated': '2024-07-25T21:53:51.322+0000',
 'id': 'AZDr4tYLF_MgIWqZ-n1h',
 'ids': {'documentId': 'AZDrrGvkF_MgIWqZ-nws', 'provider': 'dldev01.vyasa.com'},
 'name': 'Create Vector Store Embeddings',
 'percentComplete': 100.0,
 'status': 'COMPLETE',
 'type': 'createEmbeddings'}