
Prompting a Model

Introduction

The layar/gpt/generate/ endpoint is used to prompt the model. This guide covers the basics of prompting via the API.

Pre-Reqs

Before you can prompt the model, your API requests must be authenticated. Make sure you have already followed the instructions for importing dependencies and authentication in the Getting Started Guide.

👍

Check Your Imported Modules

Make sure you have imported the requests and json modules before proceeding with this guide.
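As a minimal sketch, the setup carried over from the Getting Started Guide might look like the following. The environment URL and token values here are placeholders, not real credentials.

import requests
import json

# Placeholder values -- replace with the environment URL and access token
# obtained by following the Getting Started Guide
envurl = 'https://sandbox.certara.ai'
token = 'YOUR_ACCESS_TOKEN'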

The following header can be used in your request.

header = {'Accept': 'application/json',
          'Content-Type': 'application/json',
          'Authorization': f"Bearer {token}",
          'X-Vyasa-Client': 'layar',
          'X-Vyasa-Data-Providers': 'sandbox.certara.ai',
          'X-Vyasa-Data-Fabric': 'YOUR_FABRIC_ID'
         }

Request Body

The body of the request needs only a minimal set of parameters to get a valid response.

body = {'content' : 'What disease is JAK2 protein associated with?',
        'task' : 'generate',
        'sources' : [{'documentID' : '1234567ABCDEF'}]
}

📘

Generate Endpoint Parameters

There is a wide variety of parameters that can help control the quality of the response as well as the throughput. Please review Generate Parameters for more info.
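Because sources is a JSON array, it can presumably reference more than one document. The sketch below assumes that; the document IDs are placeholders.

# Sketch assuming sources accepts multiple entries; document IDs are placeholders
body = {'content' : 'What disease is JAK2 protein associated with?',
        'task' : 'generate',
        'sources' : [{'documentID' : '1234567ABCDEF'},
                     {'documentID' : 'ABCDEF7654321'}]
}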

Utilizing a Specific Model

If you are on Layar 1.10, the body of the request can include the model parameter. This allows you to specify which model to use when generating a response.

Pre-Reqs

Before you can utilize this parameter, please review the Assigning Models to GPUs guide.

Request Body

The body of the request utilizing a specific model would look as follows.

body = {'content' : 'What disease is JAK2 protein associated with?',
        'task' : 'generate',
        'model' : 'mixtral-instruct-awq',
        'sources' : [{'documentID' : '1234567ABCDEF'}]
}

📘

Correct Model Name

In order to use the model parameter correctly, you need to ensure you are using the correct model name. For example, if the model name in the layar.config file is casperhansen/mixtral-instruct-awq, the name you would use in the request is mixtral-instruct-awq.
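As an illustration, the request name is simply the portion of the config entry after the final slash. The helper below is hypothetical, not part of the Layar API.

# Hypothetical helper: derive the request model name from a layar.config entry
config_model = 'casperhansen/mixtral-instruct-awq'
model_name = config_model.split('/')[-1]   # 'mixtral-instruct-awq'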

POST Request

Now that the body is constructed, a POST request can be made to the endpoint.

generateURL = f'{envurl}/layar/gpt/generate'

response = requests.post(generateURL,
                         headers = header,
                         json = body)

# Optional: pretty-print the JSON response
print(json.dumps(response.json(), indent=4))
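It can also help to confirm the request succeeded before parsing the body. The check below is a general requests pattern, not specific to Layar.

# Optional status check before parsing (a general requests pattern)
if response.status_code != 200:
    print(f'Request failed with status {response.status_code}: {response.text}')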

📘

Generate Endpoint Response

For more information on the response, please review Generate Response.