Video capture in iOS using MonoTouch
iOS · Published 2022-03-30
I have code that creates, configures, and starts a video capture session in Objective-C without any problems. I ported the sample to C# with MonoTouch 4.0.3 and ran into a few issues. Here is the code:

void Initialize ()
    {
        // Create notifier delegate class
        captureVideoDelegate = new CaptureVideoDelegate (this);

        // Create capture session
        captureSession = new AVCaptureSession ();
        captureSession.SessionPreset = AVCaptureSession.Preset640x480;

        // Create capture device
        captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video);

        // Create capture device input
        NSError error;
        captureDeviceInput = new AVCaptureDeviceInput (captureDevice, out error);
        captureSession.AddInput (captureDeviceInput);

        // Create capture device output
        captureVideoOutput = new AVCaptureVideoDataOutput ();
        captureSession.AddOutput (captureVideoOutput);
        captureVideoOutput.VideoSettings.PixelFormat = CVPixelFormatType.CV32BGRA;
        captureVideoOutput.MinFrameDuration = new CMTime (1, 30);
        //
        // ISSUE 1
        // In the original Objective-C code I was creating a dispatch_queue_t object, passing it to
        // the setSampleBufferDelegate:queue: message, and it worked. Here I could not find an
        // equivalent to the queue mechanism. (Also not sure if the delegate should be used like this.)
        //
        captureVideoOutput.SetSampleBufferDelegateQueue (captureVideoDelegate, ???????);

        // Create preview layer
        previewLayer = AVCaptureVideoPreviewLayer.FromSession (captureSession);
        previewLayer.Orientation = AVCaptureVideoOrientation.LandscapeRight;
        //
        // ISSUE 2:
        // Didn't find any VideoGravity-related enumeration in MonoTouch (not sure if a string will work)
        //
        previewLayer.VideoGravity = "AVLayerVideoGravityResizeAspectFill";
        previewLayer.Frame = new RectangleF (0, 0, 1024, 768);
        this.View.Layer.AddSublayer (previewLayer);

        // Start capture session
        captureSession.StartRunning ();

    }

    #endregion

    public class CaptureVideoDelegate : AVCaptureVideoDataOutputSampleBufferDelegate
    {
        private VirtualDeckViewController mainViewController;

        public CaptureVideoDelegate(VirtualDeckViewController viewController)
        {
            mainViewController = viewController;
        }

        public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
        {
            // TODO: Implement - see: http://go-mono.com/docs/index.aspx?link=T%3aMonoTouch.Foundation.modelattribute

        }
    }

Issue 1:
Not sure how to use the delegate correctly with the SetSampleBufferDelegateQueue method. I also could not find an equivalent of the dispatch_queue_t object, which works fine in Objective-C, to pass as the second parameter.

Issue 2:
I did not find any VideoGravity enumeration in the MonoTouch libraries, and I am not sure whether passing a string with the constant's value will work.

I have searched for any clue to solve this, but found no clear sample. Any example or information on how to do the same in MonoTouch would be highly appreciated.

Many thanks.

Solution

Here is my code. Make good use of it. I have only stripped out the unimportant parts: all of the initialization is there, as well as the reading of the sample output buffer.

I then have code that processes the CVImageBuffer via a linked custom ObjC library. If you need to process it in MonoTouch, you will have to go the extra mile and convert it to a CGImage or UIImage. There is no function for that in MonoTouch (AFAIK), so you need to bind it yourself from plain ObjC. A sample in ObjC is here: how to convert a CVImageBufferRef to UIImage
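For reference, the linked ObjC sample translates fairly directly into MonoTouch. The following is an untested sketch: the CVPixelBuffer and CGBitmapContext members are taken from MonoTouch's CoreVideo/CoreGraphics bindings (names may vary slightly between MonoTouch versions), and it assumes the output was configured for the CV32BGRA pixel format as in the code below:

```csharp
// Untested sketch of CVImageBuffer -> UIImage, following the linked ObjC sample.
// Assumes the capture output delivers 32BGRA frames.
UIImage ImageFromSampleBuffer (CMSampleBuffer sampleBuffer)
{
    using (CVPixelBuffer pixelBuffer = sampleBuffer.GetImageBuffer () as CVPixelBuffer)
    {
        // Lock the base address so we can read the pixel data
        pixelBuffer.Lock (CVOptionFlags.None);
        int bytesPerRow = pixelBuffer.BytesPerRow;
        int width = pixelBuffer.Width;
        int height = pixelBuffer.Height;

        // Draw the raw pixels into a bitmap context and extract a CGImage
        using (CGColorSpace colorSpace = CGColorSpace.CreateDeviceRGB ())
        using (CGBitmapContext context = new CGBitmapContext (
            pixelBuffer.BaseAddress, width, height, 8, bytesPerRow,
            colorSpace, CGImageAlphaInfo.PremultipliedFirst))
        using (CGImage cgImage = context.ToImage ())
        {
            pixelBuffer.Unlock (CVOptionFlags.None);
            return UIImage.FromImage (cgImage);
        }
    }
}
```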

public void InitCapture ()
        {
            try
            {
                // Setup the input
                NSError error = new NSError ();
                captureInput = new AVCaptureDeviceInput (AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video), out error);

                // Setup the output
                captureOutput = new AVCaptureVideoDataOutput ();
                captureOutput.AlwaysDiscardsLateVideoFrames = true;
                captureOutput.SetSampleBufferDelegateAndQueue (avBufferDelegate, dispatchQueue);
                captureOutput.MinFrameDuration = new CMTime (1, 10);

                // Set the video output to store frames in BGRA (compatible across devices)
                captureOutput.VideoSettings = new AVVideoSettings (CVPixelFormatType.CV32BGRA);

                // Create a capture session
                captureSession = new AVCaptureSession ();
                captureSession.SessionPreset = AVCaptureSession.PresetMedium;
                captureSession.AddInput (captureInput);
                captureSession.AddOutput (captureOutput);

                // Setup the preview layer
                prevLayer = new AVCaptureVideoPreviewLayer (captureSession);
                prevLayer.Frame = liveView.Bounds;
                prevLayer.VideoGravity = "AVLayerVideoGravityResize"; // image may be slightly distorted, but the red bar position will be accurate

                liveView.Layer.AddSublayer (prevLayer);

                StartLiveDecoding ();
            }
            catch (Exception ex)
            {
                Console.WriteLine (ex.ToString ());
            }
        }
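The snippet above references avBufferDelegate and dispatchQueue without showing how they are created. A minimal sketch of that setup, assuming MonoTouch's DispatchQueue wrapper (in MonoTouch.CoreFoundation) around GCD; the delegate class name here is hypothetical:

```csharp
using MonoTouch.CoreFoundation;

// Fields referenced by InitCapture (names carried over from the answer's code)
DispatchQueue dispatchQueue;
CaptureVideoDelegate avBufferDelegate; // hypothetical: any AVCaptureVideoDataOutputSampleBufferDelegate subclass

void SetupDelegateAndQueue ()
{
    // MonoTouch wraps dispatch_queue_t as DispatchQueue; creating one here
    // replaces the dispatch_queue_create() call from the Objective-C version
    dispatchQueue = new DispatchQueue ("myVideoCaptureQueue");
    avBufferDelegate = new CaptureVideoDelegate ();
}
```

This also answers ISSUE 1 from the question: the second parameter of SetSampleBufferDelegateAndQueue takes such a DispatchQueue instance.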

public void DidOutputSampleBuffer (AVCaptureOutput captureOutput, MonoTouch.CoreMedia.CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
        {
            Console.WriteLine ("DidOutputSampleBuffer: enter");

            if (isScanning)
            {
                CVImageBuffer imageBuffer = sampleBuffer.GetImageBuffer ();

                Console.WriteLine ("DidOutputSampleBuffer: calling decode");

                //      NSLog(@"got image w=%d h=%d bpr=%d", CVPixelBufferGetWidth(imageBuffer), CVPixelBufferGetHeight(imageBuffer), CVPixelBufferGetBytesPerRow(imageBuffer));
                // call the decoder
                DecodeImage (imageBuffer);
            }
            else
            {
                Console.WriteLine ("DidOutputSampleBuffer: not scanning");
            }

            Console.WriteLine ("DidOutputSampleBuffer: quit");
        }
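StartLiveDecoding and the isScanning flag are not shown in the answer; presumably the method just enables the flag and starts the session, so that DidOutputSampleBuffer begins receiving frames. A hypothetical sketch:

```csharp
bool isScanning;

void StartLiveDecoding ()
{
    // Begin delivering frames to DidOutputSampleBuffer
    isScanning = true;
    captureSession.StartRunning ();
}
```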
