一尘不染

Converting an image to a CVPixelBuffer for machine learning in Swift

swift

I am trying to get the sample Core ML model that Apple demoed at WWDC 2017 to run correctly. I am using GoogLeNet to try to classify an image (see the Apple machine learning page). The model takes a CVPixelBuffer as its input. I have an image named imageSample.jpg that I am using for this demo. My code is below:

        var sample = UIImage(named: "imageSample")?.cgImage
        let bufferThree = getCVPixelBuffer(sample!)

        let model = GoogLeNetPlaces()
        guard let output = try? model.prediction(input: GoogLeNetPlacesInput.init(sceneImage: bufferThree!)) else {
            fatalError("Unexpected runtime error.")
        }

        print(output.sceneLabel)

I always get the "Unexpected runtime error" in the output rather than an image classification. My code for converting the image is below:

func getCVPixelBuffer(_ image: CGImage) -> CVPixelBuffer? {
    let imageWidth = image.width
    let imageHeight = image.height

    let attributes: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]

    // Create an empty ARGB pixel buffer at the image's native size.
    var pxbuffer: CVPixelBuffer? = nil
    CVPixelBufferCreate(kCFAllocatorDefault,
                        imageWidth,
                        imageHeight,
                        kCVPixelFormatType_32ARGB,
                        attributes as CFDictionary,
                        &pxbuffer)

    if let _pxbuffer = pxbuffer {
        let flags = CVPixelBufferLockFlags(rawValue: 0)
        CVPixelBufferLockBaseAddress(_pxbuffer, flags)
        let pxdata = CVPixelBufferGetBaseAddress(_pxbuffer)

        // Draw the CGImage into the buffer's backing memory.
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata,
                                width: imageWidth,
                                height: imageHeight,
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(_pxbuffer),
                                space: rgbColorSpace,
                                bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)

        if let _context = context {
            _context.draw(image, in: CGRect(x: 0, y: 0, width: imageWidth, height: imageHeight))
        } else {
            CVPixelBufferUnlockBaseAddress(_pxbuffer, flags)
            return nil
        }

        CVPixelBufferUnlockBaseAddress(_pxbuffer, flags)
        return _pxbuffer
    }

    return nil
}

I got this code from a previous post. I know it may not be correct, but I don't know how to do this myself, and I believe this is the part that contains the error. The model calls for input of the following type: Image<RGB,224,224>
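
A likely source of the runtime error is a size mismatch: the model wants a 224x224 buffer, while getCVPixelBuffer(_:) above produces one at the source image's native dimensions. As a minimal sketch (not part of the original post; the imageSample name is taken from the question), the image could be scaled with UIGraphicsImageRenderer before conversion. Whether 32ARGB actually matches the model's expected RGB layout is a separate concern that the answer below sidesteps entirely.

import UIKit

// Sketch: scale to the 224x224 size the model expects, then reuse the
// question's getCVPixelBuffer(_:) helper on the resized CGImage.
let targetSize = CGSize(width: 224, height: 224)
let renderer = UIGraphicsImageRenderer(size: targetSize)
let resized = renderer.image { _ in
    UIImage(named: "imageSample")?.draw(in: CGRect(origin: .zero, size: targetSize))
}
if let cgImage = resized.cgImage {
    let buffer = getCVPixelBuffer(cgImage)
    // hand buffer to model.prediction(input:) as in the first snippet
    _ = buffer
}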



1 Answer

一尘不染

You don't need to do any image mangling yourself to use a Core ML model with images; the new Vision framework can do that for you.

import Vision
import CoreML

let model = try VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model)
let request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
let handler = VNImageRequestHandler(url: myImageURL)
try handler.perform([request])  // perform(_:) throws

func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation]
        else { fatalError("huh") }
    for classification in results {
        print(classification.identifier, // the scene label
              classification.confidence)
    }
}
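
To wire this up to the model from the question: the Core ML generated class exposes its underlying MLModel through a model property. A sketch under that assumption (classifyScene is a hypothetical wrapper name; GoogLeNetPlaces and the sample CGImage come from the question):

import UIKit
import Vision
import CoreML

// Sketch: Vision drives the GoogLeNetPlaces model from the question.
// Vision scales and converts the CGImage to the 224x224 buffer the
// model expects, so no manual CVPixelBuffer code is needed.
func classifyScene(_ cgImage: CGImage) throws {
    let visionModel = try VNCoreMLModel(for: GoogLeNetPlaces().model)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        if let top = results.first {
            print(top.identifier, top.confidence) // scene label and its confidence
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])
}

Called as try classifyScene(sample!), this replaces both the manual resize and the getCVPixelBuffer(_:) helper from the question.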

There should be more information in the WWDC17 session on Vision, coming this afternoon.
