iOS – OpenGL ES 2.0 to Video on iPad/iPhone
Despite the good information here on StackOverflow, I'm still stuck.

I am trying to write an OpenGL renderbuffer to a video on the iPad 2 (using iOS 4.3). This is exactly what I am attempting:

A) Set up an AVAssetWriterInputPixelBufferAdaptor

> Create an AVAssetWriter that points to a video file
> Set up an AVAssetWriterInput with the appropriate settings
> Set up an AVAssetWriterInputPixelBufferAdaptor to add data to the video file

B) Write data to the video file using that AVAssetWriterInputPixelBufferAdaptor

> Render OpenGL code to the screen
> Get the OpenGL buffer via glReadPixels
> Create a CVPixelBufferRef from the OpenGL data
> Append that pixel buffer to the AVAssetWriterInputPixelBufferAdaptor using the appendPixelBuffer method

However, I'm having problems getting this to work. My current strategy is to set up the AVAssetWriterInputPixelBufferAdaptor when a button is pressed. Once the AVAssetWriterInputPixelBufferAdaptor is valid, I set a flag to signal the EAGLView to create a pixel buffer and append it to the video file via appendPixelBuffer for a given number of frames.
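For reference, the per-frame hook in my EAGLView looks roughly like the sketch below. Here drawFrame, renderScene, and context are placeholder names for your display-link callback, draw method, and EAGLContext; only VIDEO_WRITER_IS_READY and captureScreenVideo come from the code that follows.

// Hypothetical per-frame callback (e.g. driven by a CADisplayLink).
- (void)drawFrame {
    [self renderScene];  // issue the OpenGL ES draw calls for this frame

    // Capture while the framebuffer still holds this frame's pixels:
    // after presentRenderbuffer: the renderbuffer contents are undefined,
    // so glReadPixels has to run before the present.
    if (VIDEO_WRITER_IS_READY) {
        [self captureScreenVideo];
    }

    [context presentRenderbuffer:GL_RENDERBUFFER];
}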

Right now, my code crashes as it tries to append the second pixel buffer, giving me the following error:

-[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0

Here is my AVAsset setup code (much of it is based on Rudy Aramayo's code, which works on normal images, but isn't set up for textures):

- (void) testVideoWriter {
    // Initialize global info
    MOVIE_NAME = @"Documents/Movie.mov";
    CGSize size = CGSizeMake(480, 320);
    frameLength = CMTimeMake(1, 5);
    currentTime = kCMTimeZero;
    currentFrame = 0;

    // NOTE: MOVIE_PATH is built here but unused below; the writer targets betaCompressionDirectory.
    NSString *MOVIE_PATH = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
    NSError *error = nil;

    unlink([betaCompressionDirectory UTF8String]);
    videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory]
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&error];

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                   nil];
    writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                     outputSettings:videoSettings];
    //writerInput.expectsMediaDataInRealTime = NO;

    NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                           [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                           nil];
    adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                               sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
    [adaptor retain];
    [videoWriter addInput:writerInput];

    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    VIDEO_WRITER_IS_READY = true;
}

OK, now that my videoWriter and adaptor are set up, I tell my OpenGL renderer to create a pixel buffer for every frame:

- (void) captureScreenVideo {
    if (!writerInput.readyForMoreMediaData) {
        return;
    }

    CGSize esize = CGSizeMake(eagl.backingWidth, eagl.backingHeight);
    NSInteger myDataLength = esize.width * esize.height * 4;
    GLuint *buffer = (GLuint *) malloc(myDataLength);
    glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    CVPixelBufferRef pixel_buffer = NULL;
    CVPixelBufferCreateWithBytes(NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA,
                                 buffer, 4 * esize.width, NULL, NULL, NULL, &pixel_buffer);

    /* DON'T FREE THIS BEFORE USING pixel_buffer! */
    //free(buffer);

    if (![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
        NSLog(@"FAIL");
    } else {
        NSLog(@"Success: %d", currentFrame);
        currentTime = CMTimeAdd(currentTime, frameLength);
    }

    free(buffer);
    CVPixelBufferRelease(pixel_buffer);

    currentFrame++;
    if (currentFrame > MAX_FRAMES) {
        VIDEO_WRITER_IS_READY = false;
        [writerInput markAsFinished];
        [videoWriter finishWriting];
        [videoWriter release];
        [self moveVideoToSavedPhotos];
    }
}

Finally, I move the video to the camera roll:

- (void) moveVideoToSavedPhotos {
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
    NSURL *fileURL = [NSURL fileURLWithPath:localVid];
    [library writeVideoAtPathToSavedPhotosAlbum:fileURL
                                completionBlock:^(NSURL *assetURL, NSError *error) {
        if (error) {
            NSLog(@"%@: Error saving context: %@", [self class], [error localizedDescription]);
        }
    }];
    [library release];
}
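One guard I may add here later: ALAssetsLibrary can reject files it considers incompatible with the saved photos album, and it exposes a check for exactly that. A sketch of the idea, placed after fileURL is created above:

// Hypothetical compatibility check before writing the video out.
if (![library videoAtPathIsCompatibleWithSavedPhotosAlbum:fileURL]) {
    NSLog(@"Video at %@ is not compatible with the saved photos album", fileURL);
    [library release];
    return;
}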

However, as I said, I'm crashing in the call to appendPixelBuffer.

Sorry for sending so much code, but I really don't know what I'm doing wrong. It seemed like it would be trivial to update a project that writes images to a video, but I'm unable to take the pixel buffer I create via glReadPixels and append it. It's driving me crazy! If anyone has any advice or a working code example of OpenGL to video, that would be amazing... Thanks!

Solution

I just got something similar to this working in my open source GPUImage framework, based on the above code, so I thought I'd provide my working solution. In my case, I was able to use a pixel buffer pool, as suggested by Srikumar, instead of pixel buffers created manually for each frame.

I first configure the movie to be recorded:

NSError *error = nil;

assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil) {
    NSLog(@"Error: %@", error);
}

NSMutableDictionary *outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject:AVVideoCodecH264 forKey:AVVideoCodecKey];
[outputSettings setObject:[NSNumber numberWithInt:videoSize.width] forKey:AVVideoWidthKey];
[outputSettings setObject:[NSNumber numberWithInt:videoSize.height] forKey:AVVideoHeightKey];

assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;

// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader
// to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                       [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                       [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                       [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                       nil];

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
                                                                                               sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

[assetWriter addInput:assetWriterVideoInput];

and then use this code to grab each rendered frame using glReadPixels():

CVPixelBufferRef pixel_buffer = NULL;

CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess)) {
    return;
} else {
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}

// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);

if (![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
} else {
    //NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
CVPixelBufferRelease(pixel_buffer);

One thing I noticed is that if I tried to append two pixel buffers with the same integer time value (in the timescale provided), the entire recording would fail and the input would never take another pixel buffer. Similarly, if I tried to append a pixel buffer after retrieving one from the pool failed, it would abort the recording. Hence the early bailout in the code above.
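The kind of check I mean would look something like the following sketch. previousFrameTime is a name of my own invention here, an instance variable initialized to kCMTimeInvalid; it is not part of the code above.

// Before creating and appending the pixel buffer:
CMTime frameTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);
if (CMTIME_IS_VALID(previousFrameTime) && (CMTimeCompare(frameTime, previousFrameTime) <= 0)) {
    return; // same or earlier timestamp: drop this frame rather than wedge the writer
}
previousFrameTime = frameTime;
// ... then grab the pixel buffer from the pool and append it at frameTime, as above.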

In addition to the above code, I use a color-swizzling shader to convert the RGBA rendering in my OpenGL ES scene to BGRA for fast encoding by the AVAssetWriter. With this, I'm able to record 640x480 video at 30 FPS on an iPhone 4.
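The core of that swizzling shader is tiny. Roughly, with GPUImage-style varying and uniform names (treat this as a sketch rather than the exact source):

// Fragment shader that samples the scene texture and emits its channels
// reordered as BGRA, so the bytes glReadPixels returns already match
// kCVPixelFormatType_32BGRA.
NSString *const kColorSwizzlingFragmentShaderString =
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D inputImageTexture;\n"
    @"void main()\n"
    @"{\n"
    @"    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;\n"
    @"}\n";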

Again, all of the code for this can be found within the GPUImage repository, under the GPUImageMovieWriter class.
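Wiring it up from the outside looks roughly like this (from memory, so treat the method names as a sketch and check the repository for the current API):

// Record the output of a GPUImage filter chain to a movie file.
NSURL *movieURL = [NSURL fileURLWithPath:[NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"]];
GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(640.0, 480.0)];
[filter addTarget:movieWriter]; // `filter` is the last filter in your processing chain
[movieWriter startRecording];
// ... render for a while, then:
[movieWriter finishRecording];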
