- Ensure compatibility with multiple platforms, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above.
- Reduce dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Data

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to achieve transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly valuable for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for receiving audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Applications

The SDK integrates with LeMUR to enable developers to build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features:

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

To read more, check out the official AssemblyAI blog.
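To try the examples above, the SDK first needs to be added to a project. A minimal setup sketch, assuming the package is published on NuGet under the ID `AssemblyAI` (verify the current package name and version on NuGet before use):

```shell
# Create a new console project and add the AssemblyAI SDK
# (package ID "AssemblyAI" is assumed here)
dotnet new console -n TranscriptionDemo
cd TranscriptionDemo
dotnet add package AssemblyAI
```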