OpenGL ES to video in iOS (rendering to a texture with the iOS 5 texture cache)
I got this working with glReadPixels, where I read all of the pixels into a void * buffer, created a CVPixelBufferRef from them, and appended it to an AVAssetWriterInputPixelBufferAdaptor, but it was too slow because glReadPixels takes so much time. I then found that you can do the same thing with an FBO and a texture cache, only much faster. Here is the code in my drawInRect method, based on what Apple uses:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}

CFDictionaryRef empty; // empty value for the attribute value
CFMutableDictionaryRef attrs2;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL, NULL, 0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs2, kCVPixelBufferIOSurfacePropertiesKey, empty);

//CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
CVPixelBufferRef pixiel_bufer4e = NULL;
CVPixelBufferCreate(kCFAllocatorDefault,
                    (int)_screenWidth, (int)_screenHeight,
                    kCVPixelFormatType_32BGRA,
                    attrs2,
                    &pixiel_bufer4e);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCashe, pixiel_bufer4e,
                                             NULL,            // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA,         // opengl format
                                             (int)_screenWidth, (int)_screenHeight,
                                             GL_BGRA,         // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);
CFRelease(attrs2);
CFRelease(empty);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);

if ([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
    float result = currentTime.value;
    NSLog(@"\n\n\4eta dAnni i current time e : %f \n\n", result);
    currentTime = CMTimeAdd(currentTime, frameLength);
}

CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
CVPixelBufferRelease(pixiel_bufer4e);
CFRelease(renderTexture);
CFRelease(coreVideoTextureCashe);
It records a video and it is very fast, but the video is just black. I think the texture cache ref is not right, or that I am filling it incorrectly.
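(For comparison, the glReadPixels approach I mentioned at the start looked roughly like this. This is a minimal sketch rather than my exact code; it reuses pixelAdapter, currentTime, and frameLength from the code above, and the GL_BGRA read format assumes the BGRA read-format extension is available.)

// Read the rendered frame back into CPU memory, wrap it in a pixel buffer, and append it.
GLubyte *rawPixels = (GLubyte *)malloc((int)_screenWidth * (int)_screenHeight * 4);
glReadPixels(0, 0, (int)_screenWidth, (int)_screenHeight, GL_BGRA, GL_UNSIGNED_BYTE, rawPixels); // note: the image comes back bottom-up

CVPixelBufferRef readbackBuffer = NULL;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                             (int)_screenWidth, (int)_screenHeight,
                             kCVPixelFormatType_32BGRA,
                             rawPixels,
                             (int)_screenWidth * 4, // bytes per row
                             NULL, NULL,            // no release callback / refcon in this sketch
                             NULL,                  // no pixel buffer attributes
                             &readbackBuffer);

if ([pixelAdapter appendPixelBuffer:readbackBuffer withPresentationTime:currentTime]) {
    currentTime = CMTimeAdd(currentTime, frameLength);
}

CVPixelBufferRelease(readbackBuffer);
free(rawPixels);

Even in this simplified form, the glReadPixels round trip to CPU memory is what makes the whole thing slow.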
As an update, here is another way I tried it; I must be missing something. In viewDidLoad, after I set up the OpenGL context, I do this:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}

// creates the pixel buffer
pixel_buffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL, [pixelAdapter pixelBufferPool], &pixel_buffer);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCashe, pixel_buffer,
                                             NULL,            // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA,         // opengl format
                                             (int)screenWidth, (int)screenHeight,
                                             GL_BGRA,         // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
Then in drawInRect I do this:
if (isRecording && writerInput.readyForMoreMediaData) {
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    if ([pixelAdapter appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    CVPixelBufferRelease(pixel_buffer);
}
However, it crashes with bad_access on renderTexture, which is not nil but 0x000000001.
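(A caveat worth noting for this variant: an AVAssetWriterInputPixelBufferAdaptor's pixelBufferPool is nil until the asset writer has started writing, so when this runs in viewDidLoad before recording starts, CVPixelBufferPoolCreatePixelBuffer can leave pixel_buffer as NULL and the texture-cache call is then handed an invalid image buffer. A defensive sketch of the buffer creation, with a hypothetical guard:)

// Hypothetical guard around the pool-based buffer creation.
CVReturn poolErr = CVPixelBufferPoolCreatePixelBuffer(NULL, [pixelAdapter pixelBufferPool], &pixel_buffer);
if (poolErr != kCVReturnSuccess || pixel_buffer == NULL) {
    NSLog(@"No pixel buffer from the pool (error %d); has the asset writer started writing yet?", poolErr);
    return;
}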
With the code below I actually managed to get a video file out, but it has some green and red flickering. I use the BGRA pixelFormatType.
Here I create the texture cache:
CVReturn err2 = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err2)
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err2);
    return;
}
Then in drawInRect I call this:
if (isRecording && writerInput.readyForMoreMediaData) {
    [self cleanUpTextures];

    CFDictionaryRef empty; // empty value for the attribute value
    CFMutableDictionaryRef attrs2;
    empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                               NULL, NULL, 0,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
    attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                       &kCFTypeDictionaryKeyCallBacks,
                                       &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(attrs2, kCVPixelBufferIOSurfacePropertiesKey, empty);

    //CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
    CVPixelBufferRef pixiel_bufer4e = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        (int)_screenWidth, (int)_screenHeight,
                        kCVPixelFormatType_32BGRA,
                        attrs2,
                        &pixiel_bufer4e);

    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCashe, pixiel_bufer4e,
                                                 NULL,            // texture attributes
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA,         // opengl format
                                                 (int)_screenWidth, (int)_screenHeight,
                                                 GL_BGRA,         // native iOS format
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 &renderTexture);
    CFRelease(attrs2);
    CFRelease(empty);

    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

    CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);
    if ([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
        float result = currentTime.value;
        NSLog(@"\n\n\4eta dAnni i current time e : %f \n\n", result);
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
    CVPixelBufferRelease(pixiel_bufer4e);
    CFRelease(renderTexture);
    // CFRelease(coreVideoTextureCashe);
}
I know I could optimize this by not doing all of it here on every frame, but I wanted to make it work first. In cleanUpTextures I flush the texture cache with:
CVOpenGLESTextureCacheFlush(coreVideoTextureCashe, 0);
Something may be wrong on the RGBA side, or I don't know what, but it still seems to be picking up the wrong cache somehow.
Instead, following what I describe in this answer, I create a pixel buffer for the cached texture, assign that texture to the FBO I am rendering into, and then append that pixel buffer using the AVAssetWriter's pixel buffer input on every frame. It is much faster to use a single pixel buffer than to recreate one every frame. You also want to leave that pixel buffer associated with your FBO's texture target, rather than associating it on every frame.
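In outline, that pattern looks roughly like the following. This is a sketch using the variable names from the question rather than the actual GPUImage source; the glFinish call and the comments are additions to make the ordering explicit.

// One-time setup, after the texture cache, the FBO, and the asset writer exist.
// Note: [pixelAdapter pixelBufferPool] is nil until the writer has started writing.
CVPixelBufferRef pixel_buffer = NULL;
CVOpenGLESTextureRef renderTexture = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL, [pixelAdapter pixelBufferPool], &pixel_buffer);
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCashe, pixel_buffer,
                                             NULL,            // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA,         // opengl format
                                             (int)_screenWidth, (int)_screenHeight,
                                             GL_BGRA,         // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);

// Per frame, after drawing into that FBO.
if (isRecording && writerInput.readyForMoreMediaData) {
    glFinish(); // make sure the GPU has finished rendering before the pixel buffer is read
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    if ([pixelAdapter appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    // No per-frame release: the same pixel buffer and texture are reused for every frame.
}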
I encapsulate this recording code within the GPUImageMovieWriter in my open source GPUImage framework, if you want to see how this works in practice. As I indicate in the answer linked above, doing the recording in this fashion leads to extremely fast encodes.
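If you just want the end result, typical GPUImageMovieWriter usage looks roughly like this. This is a sketch based on the framework's documented camera-to-file pipeline; the session preset, filter, output size, and file name are placeholders.

// Camera -> filter -> movie writer, with the frames staying on the GPU.
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                                        cameraPosition:AVCaptureDevicePositionBack];
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
[videoCamera addTarget:filter];

NSString *moviePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.m4v"];
[[NSFileManager defaultManager] removeItemAtPath:moviePath error:nil]; // the writer cannot overwrite an existing file
NSURL *movieURL = [NSURL fileURLWithPath:moviePath];
GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL
                                                                             size:CGSizeMake(480.0, 640.0)];
[filter addTarget:movieWriter];

[videoCamera startCameraCapture];
[movieWriter startRecording];

// ... and later, when recording should stop:
[movieWriter finishRecording];
[filter removeTarget:movieWriter];

Internally the movie writer is doing what is described above: rendering into a texture that is backed by the pixel buffer it then appends.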