This guide provides step-by-step instructions on how to create a test event, send the payload to the endpoint, verify data, and stop execution.
First, create a new event or select an existing one on VDP. This event will be processed through 'Flow' and pushed to Akamai. The amount of data pushed is determined by the number of PIDs in the initial request: more PIDs in a request mean more data to be processed and transferred to Akamai.
Once you’ve created the event, the payload you need to send to our VDP endpoint should resemble the following JSON object:
{
  "id": 2046240,
  "start_datetime": "2023-04-24T08:00:00.000Z",
  "end_datetime": "2023-04-24T09:30:00.000Z",
  "description": "TEST: Examino Title",
  "primary_source_specifier": "srt://test_primary_source_ip",
  "secondary_source_specifier": "srt://test_primary_source_ip",
  "publications": [
    {
      "__union_type": "hls_publication",
      "video_selectors": [
        {
          "bitrate_identifier": 5001
        }
      ],
      "audio_selectors": [
        {
          "__union_type": "standard_audio_selector",
          "bitrate_identifier": 5001,
          "pid": 258
        }
      ],
      "hls_provider_details": {
        "__union_type": "akamai_hls",
        "primary_base_url": "http://primary_base_url_1/",
        "backup_base_url": "http://backup_base_url_1/"
      },
      "master_playlist_url": "https://akamai_endpoint_url_1.m3u8"
    },
    {
      "__union_type": "hls_publication",
      "video_selectors": [
        {
          "bitrate_identifier": 5001
        }
      ],
      "audio_selectors": [
        {
          "__union_type": "standard_audio_selector",
          "bitrate_identifier": 5001,
          "pid": 259
        }
      ],
      "hls_provider_details": {
        "__union_type": "akamai_hls",
        "primary_base_url": "http://primary_base_url_2/",
        "backup_base_url": "http://backup_base_url_2/"
      },
      "master_playlist_url": "https://akamai_endpoint_url_2.m3u8"
    }
  ]
}
Note: In this payload:
Within the publications array, each object represents a unique PID. The more PIDs present, the more containers will be initiated, each with its own Akamai endpoint defined in master_playlist_url. Each PID is uploaded to its corresponding Akamai endpoint.
Please be aware that all information, including Akamai endpoints and event details, must be valid. If an event contains no data, our Examino containers will not start; invalid or missing data can cause the event launch to fail.
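For illustration only, the following Python sketch shows one way to assemble a publications entry per PID; the helper function name, bitrate value, and placeholder URLs simply mirror the sample payload above and are not part of the VDP API.

# Illustrative sketch: build one hls_publication entry per PID.
# The URLs and bitrate below are placeholders taken from the sample payload.
def build_publications(pids, bitrate=5001):
    publications = []
    for index, pid in enumerate(pids, start=1):
        publications.append({
            "__union_type": "hls_publication",
            "video_selectors": [{"bitrate_identifier": bitrate}],
            "audio_selectors": [{
                "__union_type": "standard_audio_selector",
                "bitrate_identifier": bitrate,
                "pid": pid,
            }],
            "hls_provider_details": {
                "__union_type": "akamai_hls",
                "primary_base_url": f"http://primary_base_url_{index}/",
                "backup_base_url": f"http://backup_base_url_{index}/",
            },
            "master_playlist_url": f"https://akamai_endpoint_url_{index}.m3u8",
        })
    return publications

# Example: two PIDs, as in the sample payload above.
publications = build_publications([258, 259])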
To initiate the event in an Examino container, which will then upload the relevant data to Akamai, call our PUT endpoint. The request body should contain the data generated on the VDP site:
PUT https://api.statsperform.video/optavoice/vdp/event/{id}
In the above API call, replace {id} with the unique identifier of your event, i.e. the value of the id field in the data set created on the VDP site.
Assuming you’ve correctly sent the payload, and the endpoint is functioning as expected, the API response will have a status code of 201. This status code denotes a successful operation where the request has been fulfilled, and a new resource was created as a result.
If you receive any status code other than 201, it indicates an error occurred during the process. The exact nature of the error can be discerned by referring to the corresponding error code. For a comprehensive list of potential error codes and their explanations, please refer to the VDP API error list in our documentation.
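As a minimal sketch of this step, assuming the payload has been saved locally as a JSON file (the file name here is illustrative) and leaving out authentication details, which depend on your VDP account, the request could be sent from Python with the requests library:

import json
import requests

# Load the payload generated on the VDP site (file name is illustrative).
with open("event_payload.json") as f:
    payload = json.load(f)

event_id = payload["id"]  # e.g. 2046240 in the sample payload
url = f"https://api.statsperform.video/optavoice/vdp/event/{event_id}"

# Authentication is omitted here; supply whatever credentials your VDP account requires.
response = requests.put(url, json=payload)

if response.status_code == 201:
    print("Event scheduled successfully.")
else:
    # Consult the VDP API error list for the meaning of other status codes.
    print(f"Request failed with status {response.status_code}: {response.text}")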
Be sure to troubleshoot any errors as they arise to keep the event management process running smoothly. Regularly verify the validity of the data you're working with, as incorrect or incomplete data is often the source of these issues.
Following a successful PUT request, you should be able to view the event under the ‘channel’ tab on the Flow page, labeled with the name you provided in the ‘description’ key of your data set. Initially, you will only see one scheduled event, regardless of the number of PIDs included in your data.
This is because, although each PID corresponds to a separate container, the system consolidates them into a single scheduled event before it starts. Don't be alarmed: this is by design and intended to simplify your viewing experience.
However, after the event has started, the view will change. You will see individual containers corresponding to each PID appear on the platform. Each of these containers represents a unique data stream, and they are being fed into Akamai endpoints as defined in your initial payload.
Our scheduling system will orchestrate the process, launching as many containers as there are PIDs detailed in your payload. It’s worth noting that while you can set the start time to a past moment, there are certain limitations to consider.
Specifically, you can only backdate the start time by up to one hour. This restriction exists because our scheduler scans the past hour when determining which events to start. As a result, any start time set more than an hour in the past will be overlooked, and the corresponding event won't be triggered.
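As a purely illustrative local check (not something the VDP API provides), you can verify that a start_datetime falls within the scheduler's one-hour window before sending the payload:

from datetime import datetime, timedelta, timezone

# Parse the start_datetime from the payload (the trailing 'Z' denotes UTC).
start = datetime.strptime("2023-04-24T08:00:00.000Z", "%Y-%m-%dT%H:%M:%S.%fZ")
start = start.replace(tzinfo=timezone.utc)

now = datetime.now(timezone.utc)
if start < now - timedelta(hours=1):
    # The scheduler only scans the past hour, so this event would never be triggered.
    raise ValueError("start_datetime is more than one hour in the past")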
Therefore, be mindful of these timing considerations when planning your events: they influence not only when your containers start but also the smooth operation of the event as a whole.
Once your event is underway, it’s crucial to ensure that data is being correctly handled and disseminated. There are two main methods for doing this:
Inspect the data directly on the Flow page by selecting the event. This will enable you to view the data currently being processed in the containers. A successful display of data here indicates that our internal systems are operating correctly, meaning that data is accurately flowing into the containers as intended.
Access the Akamai link, which will redirect you to hlsjs.video-dev.org/demo/. Here you can initiate the data stream. The stream on this page is sourced directly from Akamai, so if it begins without any issues, the data publishing process from the Flow containers to Akamai is working correctly and the data has been processed and transferred as expected.
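As a supplementary check alongside the demo page, you can request the master playlist directly. The sketch below uses the placeholder URL from the sample payload; a valid HLS master playlist should begin with the #EXTM3U tag.

import requests

# master_playlist_url from the payload (placeholder value shown here).
playlist_url = "https://akamai_endpoint_url_1.m3u8"

response = requests.get(playlist_url, timeout=10)
response.raise_for_status()

# A valid HLS master playlist always starts with the #EXTM3U tag.
if response.text.lstrip().startswith("#EXTM3U"):
    print("Playlist is being served from Akamai as expected.")
else:
    print("Endpoint reachable, but the response does not look like an HLS playlist.")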
Using these methods, you can maintain oversight of the data flow process, from initial scheduling to eventual streaming. These verification steps allow you to quickly identify and address any potential issues, helping to ensure smooth and reliable event execution.
In addition to initiating the event, our scheduler also has the responsibility of ending it. Once the time specified in the end_datetime of your payload arrives, the scheduler will automatically halt all active containers associated with the event. This means the streaming of data to the Akamai endpoints will cease, concluding the event.
This automatic cessation mechanism is designed to ensure that your events are managed with precision, adhering to the timings you defined. Therefore, you can confidently set your event parameters, knowing that the system will reliably execute the start and stop operations without the need for manual intervention.