These calls assume `fetchWithPayment` already wraps your HTTP client so that x402 payment challenges are settled and the request replayed without manual intervention.
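
If you have not set that wrapper up yet, the sketch below shows one way it could work: retry once after a 402 challenge with a signed payment attached. The `createPaymentHeader` helper and the `X-PAYMENT` header name are assumptions about your payment tooling, not part of this API, so swap in whatever your x402 client library provides.

```js
// Hypothetical sketch, not Horizon's client: retry a request once after settling an
// x402 payment challenge. `createPaymentHeader` stands in for whatever your payment
// library exposes for signing the challenge returned in the 402 body.
export function wrapWithX402(fetchImpl, createPaymentHeader) {
  return async (input, init = {}) => {
    const first = await fetchImpl(input, init);
    if (first.status !== 402) return first;

    // The 402 response body describes what the server expects to be paid.
    const challenge = await first.json();
    const paymentHeader = await createPaymentHeader(challenge);

    // Replay the original request with the signed payment attached.
    return fetchImpl(input, {
      ...init,
      headers: { ...init.headers, 'X-PAYMENT': paymentHeader },
    });
  };
}
```

You would then build the client used throughout this guide with `const fetchWithPayment = wrapWithX402(fetch, createPaymentHeader);`.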
1. Reference the video by URL
```js
const baseUrl = process.env.HORIZON_BASE_URL ?? 'https://api.horizon.new/v1';

const videoResponse = await fetchWithPayment(`${baseUrl}/extract/video`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    sourceUrl: 'https://cdn.example.com/video/launch-recap.mp4',
    sourceName: 'Launch Recap',
    options: {
      transcriptionModel: 'whisper-large-v3',
      captureSlides: true,
      segmentLength: 900,
    },
    webhookUrl: 'https://example.com/webhooks/horizon/extraction',
  }),
});

const videoJob = await videoResponse.json();
console.log('video extraction job', videoJob.jobId, videoJob.statusUrl);
```
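
Because the request passes `webhookUrl`, Horizon can notify you when the job finishes instead of (or in addition to) polling. The receiver below is only a sketch: the delivery payload shape is not documented here, so it relies on the `jobId` and `statusUrl` fields the job response already exposed and re-fetches the authoritative status rather than trusting the webhook body.

```js
import { createServer } from 'node:http';

// Hypothetical webhook receiver for the webhookUrl used above. Only jobId and
// statusUrl (already present on the job response) are assumed to exist on the event.
createServer((req, res) => {
  if (req.method !== 'POST' || req.url !== '/webhooks/horizon/extraction') {
    res.writeHead(404).end();
    return;
  }

  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', async () => {
    const event = JSON.parse(body);
    console.log('extraction webhook received', event.jobId);

    // Re-fetch the authoritative status rather than trusting the webhook body alone.
    if (event.statusUrl) {
      const status = await fetchWithPayment(event.statusUrl).then((r) => r.json());
      console.log('job state', status.state);
    }

    res.writeHead(204).end();
  });
}).listen(8787); // port is arbitrary for this sketch
```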
2. Handle synchronous completion
```js
if (videoJob.status === 'completed' && videoJob.result) {
  console.log('Inline transcript length', videoJob.result.transcript.segments.length);
  console.log('Slide frames', videoJob.result.slides?.length ?? 0);
}
```
3. Poll for longer videos
```js
let status;
do {
  status = await fetchWithPayment(videoJob.statusUrl).then((res) => res.json());
  if (status.state === 'processing') {
    console.log('Processing frames…', status.progress?.percentComplete ?? 0);
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
} while (status.state === 'processing');

if (status.state !== 'succeeded') {
  throw new Error(`Video extraction failed: ${status.error?.code ?? 'unknown'}`);
}

const { transcript, slides, chunks } = status.result;
```
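
For long videos the fixed five-second delay can add up. One option, sketched below, is to wrap the loop in a small helper with exponential backoff and an overall deadline; the helper is not part of the Horizon client, and the timeout values are arbitrary.

```js
// Sketch: poll a job's status URL with exponential backoff and an overall deadline,
// instead of the fixed five-second sleep above. Uses the same state/statusUrl fields
// as the loop in this guide.
async function waitForJob(statusUrl, { timeoutMs = 15 * 60_000, initialDelayMs = 2_000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  let delay = initialDelayMs;

  while (true) {
    const status = await fetchWithPayment(statusUrl).then((res) => res.json());
    if (status.state !== 'processing') return status;

    if (Date.now() + delay > deadline) {
      throw new Error(`Timed out waiting for job at ${statusUrl}`);
    }
    await new Promise((resolve) => setTimeout(resolve, delay));
    delay = Math.min(delay * 2, 30_000); // cap the backoff at 30 seconds
  }
}

// Usage mirrors the loop above:
// const status = await waitForJob(videoJob.statusUrl);
```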
4. Put results to work
- Pipe `transcript.segments` into your search index, aligning timestamps with video playback (see the sketch after this list).
- Use captured `slides` or keyframes to populate knowledge bases or highlight reels.
- Store `chunks` alongside your own annotations so `/search` can surface the video in downstream workflows.
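
What that ingest step looks like depends on your stack. As one sketch, assuming a hypothetical `searchIndex.add` helper and `text`/`start`/`end` fields on each transcript segment, the timestamp alignment from the first bullet might look like this:

```js
// Hypothetical ingest step: index transcript segments with playback timestamps so a
// search hit can deep-link into the video. `searchIndex.add` and the segment fields
// (text, start, end) are assumptions about your store and the transcript shape.
for (const segment of transcript.segments) {
  await searchIndex.add({
    id: `launch-recap-${segment.start}`,
    text: segment.text,
    // Keep the timestamps so the player can seek straight to the matching moment.
    videoUrl: `https://cdn.example.com/video/launch-recap.mp4#t=${segment.start}`,
    startSeconds: segment.start,
    endSeconds: segment.end,
  });
}
```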