iOS – Precise timing of finishing Audio Queue playback
I am using an Audio Queue to play an audio file. I need precise timing around the last buffer.
I need to notify a function within 150-200 ms after the last buffer has finished playing.

Through the callback method I know how many buffers are enqueued.

I know the buffer size, and I know how many bytes the last buffer is filled with.

First I initialize a number of buffers, fill them with audio data, and enqueue them. When the Audio Queue needs a buffer filled, it calls the callback and I fill a buffer with data.

When no more audio data is available, the Audio Queue sends me the last empty buffer, so I fill it with whatever data I have left:

if (sharedCache.numberOfTotalPackets > 0)
            {
                if (currentlyReadingBufferIndex == [sharedCache.baseAudioCache count] - 1) {
                    inBuffer->mAudioDataByteSize = (UInt32)bytesFilled;
                    lastEnqueudBufferSize = bytesFilled;
                    err = AudioQueueEnqueueBuffer(inAQ, inBuffer, (UInt32)packetsFilled, packetDescs);
                    if (err) {
                        [self failWithErrorCode:err customError:AP_AUDIO_QUEUE_ENQUEUE_FAILED];
                    }
                    printf("if that was the last free packet description, then enqueue the buffer\n");
                    //go to the next item on keepbuffer array
                    isBufferFilled = YES;
                    [self incrementBufferUsedCount];
                    return;
                }
            }

When the Audio Queue asks for more data via the callback and I have no more data, I start counting down the buffers. When the buffer count reaches zero, meaning only one buffer is left in flight to be played, I try to stop the Audio Queue the moment playback finishes.

-(void)decrementBufferUsedCount
{

    if (buffersUsed > 0) {
        buffersUsed--;
        printf("buffer on the queue %i\n", buffersUsed);
        if (buffersUsed == 0) {
            NSLog(@"playback is finished\n");
            // end playback
            isPlaybackDone = YES;
            double sampleRate = dataFormat.mSampleRate;
            double bufferDuration = lastEnqueudBufferSize / sampleRate;
            double estimatedTimeNeded = bufferDuration * 1;
            [self performSelector:@selector(stopPlayer) withObject:nil afterDelay:estimatedTimeNeded];
        }
    }
}

-(void)stopPlayer
{
    @synchronized(self)
    {
        state = AP_STOPPING;
    }
    err = AudioQueueStop(queue, TRUE);
    if (err) {
        [self failWithErrorCode:err customError:AP_AUDIO_QUEUE_STOP_FAILED];
    }
    else
    {
        @synchronized(self)
        {
            state = AP_STOPPED;
            NSLog(@"Stopped\n");
        }
    }
}

However, it seems I cannot get precise timing here. The code above stops the player too early.

And if I switch to the following, the audio is also cut off early:

double bufferDuration = XMAQDefaultBufSize / sampleRate;
double estimatedTimeNeded = bufferDuration * 1;

If I increase the multiplier from 1 to 2 I get some delay, because the buffer size is large; 1.5 seems to be the optimal value for now, but I do not understand why lastEnqueudBufferSize / sampleRate is not working (see the rough arithmetic sketched after the file details below).

Details of the audio file and buffers:

Audio file has 22050 sample rate
#define kNumberPlaybackBuffers  4
#define kAQDefaultBufSize 16384
it is a vbr file format with no bitrate information available
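
A rough sketch of why bytesFilled / sampleRate misbehaves here (illustrative numbers only; the packet count and frames-per-packet value below are assumptions, not taken from the question): mSampleRate counts PCM frames per second, while the buffer size counts bytes of compressed VBR data, so dividing one by the other does not give a duration. A duration needs a frame count, for example derived from the number of packets enqueued in the last buffer:

// Illustrative sketch only; packetsFilled and framesPerPacket are assumed values.
UInt32  bytesFilled     = 16384;     // bytes of compressed data in the last buffer
Float64 sampleRate      = 22050.0;   // PCM frames per second
double  byteBasedGuess  = bytesFilled / sampleRate;   // ~0.74 s, but bytes are not frames

// If dataFormat.mFramesPerPacket is non-zero (for example 1024 for AAC-style codecs),
// the decoded duration is packets * framesPerPacket / sampleRate. For a format with
// variable frames per packet, sum packetDescs[i].mVariableFramesInPacket instead.
UInt32  packetsFilled   = 40;        // hypothetical packet count for the last buffer
UInt32  framesPerPacket = 1024;      // hypothetical dataFormat.mFramesPerPacket
double  frameBasedGuess = (double)(packetsFilled * framesPerPacket) / sampleRate;  // ~1.86 s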

Solution

Edit:

I found a simpler way to get the same result (+/- 10 ms). After you set up your output queue with AudioQueueNewOutput(), initialize an AudioQueueTimelineRef to be used in your output callback. (The ticksToSeconds function is included in my first method below.) Don't forget to import <mach/mach_time.h>.

//After AudioQueueNewOutput()
AudioQueueTimelineRef timeLine;     //ivar
AudioQueueCreateTimeline(queue, &timeLine);

Then in your output callback you call AudioQueueGetCurrentTime(). Caveat: the queue must be playing for the timestamps to be valid, so for very short files you may need to use the AudioQueueProcessingTap method below.

AudioTimeStamp timestamp;
AudioQueueGetCurrentTime(queue, self->timeLine, &timestamp, NULL);
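
As a small defensive sketch (assuming the same queue and timeLine ivar as above): AudioQueueGetCurrentTime() returns an OSStatus, and early in playback the returned timestamp may not yet carry a valid host time, so both can be checked before doing any math with the values:

OSStatus status = AudioQueueGetCurrentTime(queue, self->timeLine, &timestamp, NULL);
if (status == noErr && (timestamp.mFlags & kAudioTimeStampHostTimeValid)) {
    // timestamp.mSampleTime and timestamp.mHostTime are safe to use here
}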

The timestamp ties the currently playing sample to the current machine time. With that information we can get the exact machine time in the future at which the last sample will be played.

Float64 samplesLeft    = self->frameCount - timestamp.mSampleTime;//samples in file - current sample
Float64 secondsLeft    = samplesLeft / self->sampleRate;          //seconds of audio to play
UInt64  ticksLeft      = secondsLeft / ticksToSeconds();          //seconds converted to machine ticks  
UInt64  machTimeFinish = timestamp.mHostTime + ticksLeft;         //machine time of first sample + ticks left

Now that we have this machine time in the future, we can use it to accurately time whatever it is that you want to do.

UInt64 currentMachTime = mach_absolute_time();
UInt64 ticksFromNow = machTimeFinish - currentMachTime;
float secondsFromNow = ticksFromNow * ticksToSeconds();
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    //do the thing!!!
    printf("Giggety");
});

If GCD dispatch_after is not accurate enough, there are ways to set up a precision timer.
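
One possible way to do that, sketched here on the assumption that the machTimeFinish value computed above is available: park a background queue on mach_wait_until(), which waits for an absolute mach host time rather than a relative delay, and then hop back to the main queue:

#import <mach/mach_time.h>

uint64_t deadline = machTimeFinish;   // absolute host time of the last sample, from above
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^{
    mach_wait_until(deadline);        // sleeps this thread until the absolute deadline
    dispatch_async(dispatch_get_main_queue(), ^{
        //do the thing!!!
    });
});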

Using an AudioQueueProcessingTap

You can get fairly low response time from an AudioQueueProcessingTap. First, your callback essentially sits in between the audio stream. The MyObject type is just whatever self is in your code (this uses ARC bridging to get self inside the C function). Inspecting ioFlags tells you when the stream has started and finished. The ioTimeStamp of the output callback describes the time at which the first sample in the callback will hit the speaker in the future. So if you want to be exact, this is how. I added a couple of convenience functions for converting machine time to seconds.

#import <mach/mach_time.h>

double getTimeConversion(){
    double timecon;
    mach_timebase_info_data_t Tinfo;
    kern_return_t kerror;
    kerror = mach_timebase_info(&Tinfo);
    timecon = (double)Tinfo.numer / (double)Tinfo.denom;

    return  timecon;
}
double ticksToSeconds(){
    static double ticksToSeconds = 0;
    if (!ticksToSeconds) {
        ticksToSeconds = getTimeConversion() * 0.000000001;
    }
    return ticksToSeconds;
}

void processingTapCallback(
                 void *                          inClientData,
                 AudioQueueProcessingTapRef      inAQTap,
                 UInt32                          inNumberFrames,
                 AudioTimeStamp *                ioTimeStamp,
                 UInt32 *                        ioFlags,
                 UInt32 *                        outNumberFrames,
                 AudioBufferList *               ioData){

    MyObject *self = (__bridge MyObject *)inClientData;
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
    if (*ioFlags == kAudioQueueProcessingTap_EndOfStream) {
        Float64 sampTime;
        UInt32 frameCount;
        AudioQueueProcessingTapGetQueueTime(inAQTap, &sampTime, &frameCount);
        Float64 samplesInThisCallback = self->frameCount - sampTime;   //file sampleCount - queue current sample
        //double secondsInCallback = outNumberFrames / (double)self->sampleRate; outNumberFrames was inaccurate
        double secondsInCallback = samplesInThisCallback / (double)self->sampleRate;
        uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (secondsInCallback / ticksToSeconds());
        [self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
    }
}

-(void)lastSampleDoneAt:(uint64_t)lastSampTime{
    uint64_t currentTime = mach_absolute_time();
    if (lastSampTime > currentTime) {
        double secondsFromNow = (lastSampTime - currentTime) * ticksToSeconds();
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            //do the thing!!!
        });
    }
    else{
        //do the thing!!!
    }
}

You set it up like this after AudioQueueNewOutput() and before AudioQueueStart(). Note the bridged self passed to the inClientData argument: the queue holds on to self as a void * so that it can be bridged back to an Objective-C object inside the callback.

AudioStreamBasicDescription format;
AudioQueueProcessingTapRef tapRef;
UInt32 maxFrames = 0;
AudioQueueProcessingTapNew(queue, processingTapCallback, (__bridge void *)self, kAudioQueueProcessingTap_PostEffects, &maxFrames, &format, &tapRef);

You could also get the finish machine time as soon as the file starts. A little cleaner, too.

void processingTapCallback(
                 void *                          inClientData,
                 AudioQueueProcessingTapRef      inAQTap,
                 UInt32                          inNumberFrames,
                 AudioTimeStamp *                ioTimeStamp,
                 UInt32 *                        ioFlags,
                 UInt32 *                        outNumberFrames,
                 AudioBufferList *               ioData){

    MyObject *self = (__bridge MyObject *)inClientData;
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
    if (*ioFlags == kAudioQueueProcessingTap_StartOfStream) {

        uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (self->audioDurSeconds / ticksToSeconds());
        [self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
    }
}
