
GPUImage

GPUImage is an open source iOS framework for GPU-based image and video processing, introduced in the "Introducing the GPUImage framework" post on Sunset Lake Software.


GPUImage port for Android

Has anyone ported this to Android yet? I'm more interested in the framework than the shaders - things like bringing camera data into OpenGL. I have worked with it on iOS and it is very fast. Any help is much appreciated.


Source: (StackOverflow)

How to add the external GPUImage framework? [closed]

I am developing an iPhone application. I need to add the GPUImage framework, and I followed this URL for the setup instructions: http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework

I dragged and dropped the GPUImage .xcodeproj file into my project. That project's Products folder contains a libGPUImage.a file, but it shows up in red, as if the file is missing, and I am unable to access the GPUImage class files. Why is this happening?

Thanks,


Source: (StackOverflow)

Luma Key (create alpha mask from image) for iOS

I'm building an app that allows people to upload an image of themselves against a white background and the app will create a silhouette of the person.

I'm having a hard time keying out the background. I am using the GPUImage framework, and the GPUImageChromaKeyBlendFilter works great for colors, but with white or black it is really hard to key out just one of those colors. If I set the key to white or black, it keys both the same.
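
For reference, here is a minimal sketch of the kind of setup I mean (assuming GPUImage's GPUImageChromaKeyFilter API with setColorToReplaceRed:green:blue:, thresholdSensitivity, and smoothing; the asset name is just a placeholder):

#import "GPUImage.h"

UIImage *inputImage = [UIImage imageNamed:@"person.jpg"]; // placeholder asset
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];

GPUImageChromaKeyFilter *chromaKey = [[GPUImageChromaKeyFilter alloc] init];
// Key out (near-)white. This is where white vs. black gets ambiguous,
// because the sensitivity is a distance in color space, not in luminance.
[chromaKey setColorToReplaceRed:1.0 green:1.0 blue:1.0];
chromaKey.thresholdSensitivity = 0.4; // how close a pixel must be to the key color
chromaKey.smoothing = 0.1;            // soft edge around the keyed region

[source addTarget:chromaKey];
[chromaKey useNextFrameForImageCapture];
[source processImage];

UIImage *keyedImage = [chromaKey imageFromCurrentFramebuffer];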

Any advice?


Source: (StackOverflow)

How do I animate in/out a Gaussian blur effect in iOS?

For the whole iOS 7 feel, I want to apply a blur effect to a specific portion of the screen to obfuscate it, but I don't want to just throw the blur on instantly; I want to animate it in and out so the user almost sees the blur effect being applied.

Almost as if, in Photoshop, you changed the Gaussian blur value bit by bit from 0 to 10, instead of jumping from 0 to 10 in one go.

I've tried a few solutions to this, the most popular suggestion being to simply put the blurred view on top of a non-blurred view, and then lower the alpha value of the blurred view.

This works okay, but it isn't very pleasing to the eye because there's no real transition; it's just an overlay.

What would be a better way to achieve such an effect? I'm familiar with GPUImage, but not sure how to accomplish it with that.

It'd also be great if I could control what percentage of the blur is applied, so the user could drive it interactively (e.g. if the user drags halfway, the blur is half applied, and so on).
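
For reference, a minimal sketch of the kind of interactive control I have in mind, assuming GPUImage's GPUImageGaussianBlurFilter and its blurRadiusInPixels property (the snapshot image and overlay view names are hypothetical):

#import "GPUImage.h"

- (void)updateBlurToProgress:(CGFloat)progress // 0.0 ... 1.0, e.g. from a slider or pan gesture
{
    // Re-render the blurred snapshot whenever the user moves the control.
    GPUImagePicture *source =
        [[GPUImagePicture alloc] initWithImage:self.snapshotOfAreaToBlur]; // hypothetical property

    GPUImageGaussianBlurFilter *blur = [[GPUImageGaussianBlurFilter alloc] init];
    blur.blurRadiusInPixels = 10.0 * progress; // 0 = no blur, 10 = fully blurred

    [source addTarget:blur];
    [blur useNextFrameForImageCapture];
    [source processImage];

    // Show the partially blurred snapshot over the region being obscured.
    self.blurOverlayView.image = [blur imageFromCurrentFramebuffer]; // hypothetical view
}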


Source: (StackOverflow)

Key differences between Core Image and GPUImage

What are the major differences between the Core Image and GPUImage frameworks (besides GPUImage being open source)? At a glance their interfaces seem pretty similar: you apply a series of filters to an input to create an output. I see a few small differences, such as the easy-to-use LookupFilter that GPUImage has. I am trying to figure out why someone would choose one over the other for a photo filtering application.
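
To make the comparison concrete, here is a hedged sketch of the same single-filter chain (a sepia pass) written against both frameworks; the filter names are the stock ones, everything else is illustrative:

// GPUImage: source -> filter -> capture.
UIImage *inputImage = [UIImage imageNamed:@"photo.jpg"]; // placeholder asset
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
[source addTarget:sepia];
[sepia useNextFrameForImageCapture];
[source processImage];
UIImage *gpuImageResult = [sepia imageFromCurrentFramebuffer];

// Core Image: CIImage -> CIFilter -> CIContext render.
CIImage *ciInput = [CIImage imageWithCGImage:inputImage.CGImage];
CIFilter *ciSepia = [CIFilter filterWithName:@"CISepiaTone"];
[ciSepia setValue:ciInput forKey:kCIInputImageKey];
[ciSepia setValue:@0.8 forKey:kCIInputIntensityKey];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgOutput = [context createCGImage:ciSepia.outputImage
                                    fromRect:ciSepia.outputImage.extent];
UIImage *coreImageResult = [UIImage imageWithCGImage:cgOutput];
CGImageRelease(cgOutput);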


Source: (StackOverflow)

How to get corners using GPUImageHarrisCornerDetectionFilter

I am trying to get the corner points from a still image using GPUImageHarrisCornerDetectionFilter.

I have looked at the example code from the project, I have looked at the documentation, and I have looked at this post that is about the same thing: GPUImage Harris Corner Detection on an existing UIImage gives a black screen output

But I can't make it work - and I have a hard time understanding how this is supposed to work with still images.

What I have at this point is this:

func harrisCorners() -> [CGPoint] {
    var points = [CGPoint]()

    let stillImageSource: GPUImagePicture = GPUImagePicture(image: self.image)
    let filter = GPUImageHarrisCornerDetectionFilter()

    filter.cornersDetectedBlock = { (cornerArray:UnsafeMutablePointer<GLfloat>, cornersDetected:UInt, frameTime:CMTime) in
        for index in 0..<Int(cornersDetected) {
            points.append(CGPoint(x:CGFloat(cornerArray[index * 2]), y:CGFloat(cornerArray[(index * 2) + 1])))
        }
    }

    filter.forceProcessingAtSize(self.image.size)
    stillImageSource.addTarget(filter)
    stillImageSource.processImage()

    return points
}

This function always returns [] so it's obviously not working.

An interesting detail: I compiled the FilterShowcaseSwift project from the GPUImage examples, and the filter fails to find even very clear corners, such as a sheet of paper on a black background.
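
For reference, a hedged Objective-C sketch of the same still-image flow. The detail it illustrates is that cornersDetectedBlock fires asynchronously on GPUImage's processing queue, so the corners can't be returned synchronously from the calling function (it assumes GPUImagePicture's processImageWithCompletionHandler: is available in the version in use; the method name is hypothetical):

- (void)collectCornersFromImage:(UIImage *)image completion:(void (^)(NSArray *points))completion
{
    GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:image];
    GPUImageHarrisCornerDetectionFilter *filter = [[GPUImageHarrisCornerDetectionFilter alloc] init];

    NSMutableArray *points = [NSMutableArray array];
    [filter setCornersDetectedBlock:^(GLfloat *cornerArray, NSUInteger cornersDetected, CMTime frameTime) {
        for (NSUInteger i = 0; i < cornersDetected; i++) {
            // Corner positions are reported in normalized (0-1) coordinates;
            // scale by the image size if pixel positions are needed.
            [points addObject:[NSValue valueWithCGPoint:
                CGPointMake(cornerArray[i * 2], cornerArray[i * 2 + 1])]];
        }
    }];

    [picture addTarget:filter];
    [picture processImageWithCompletionHandler:^{
        // Only hand back results once processing has actually finished.
        dispatch_async(dispatch_get_main_queue(), ^{
            completion([points copy]);
        });
    }];
}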


Source: (StackOverflow)

Android: Sugar ORM No Such Table Exception

I am getting the "no such table" exception when using Sugar ORM together with the GPUImage Android library. I am using Gradle and Android Studio. Once I remove GPUImage, the issue goes away, so I don't know what's causing this exception. Details about this exception are also being discussed in this GitHub issue, and it seems a lot of people are still facing it.

My crash log is posted below

10-09 11:30:21.511 4326-4831/com.example.app E/SQLiteLog: (10) Failed to do file read, got: 0, amt: 100, last Errno: 2
10-09 11:30:26.506 4326-4831/com.example.app E/SQLiteLog: (1) no such table: IMAGE
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime: FATAL EXCEPTION: AsyncTask #1
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime: java.lang.RuntimeException: An error occured while executing doInBackground()
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.os.AsyncTask$3.done(AsyncTask.java:299)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at java.util.concurrent.FutureTask.setException(FutureTask.java:219)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at java.util.concurrent.FutureTask.run(FutureTask.java:239)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at java.lang.Thread.run(Thread.java:838)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:  Caused by: android.database.sqlite.SQLiteException: no such table: IMAGE (code 1): , while compiling: SELECT * FROM IMAGE
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.database.sqlite.SQLiteConnection.nativePrepareStatement(Native Method)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.database.sqlite.SQLiteConnection.acquirePreparedStatement(SQLiteConnection.java:886)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.database.sqlite.SQLiteConnection.prepare(SQLiteConnection.java:497)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.database.sqlite.SQLiteSession.prepare(SQLiteSession.java:588)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.database.sqlite.SQLiteProgram.<init>(SQLiteProgram.java:58)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.database.sqlite.SQLiteQuery.<init>(SQLiteQuery.java:37)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.database.sqlite.SQLiteDirectCursorDriver.query(SQLiteDirectCursorDriver.java:44)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.database.sqlite.SQLiteDatabase.rawQueryWithFactory(SQLiteDatabase.java:1314)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.database.sqlite.SQLiteDatabase.queryWithFactory(SQLiteDatabase.java:1161)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.database.sqlite.SQLiteDatabase.query(SQLiteDatabase.java:1032)
10-09 11:30:26.516 4326-4831/com.example.app E/AndroidRuntime:     at android.database.sqlite.SQLiteDatabase.query(SQLiteDatabase.java:1238)

Source: (StackOverflow)

Applying filters on a video file

I want to apply filters (effects) on a video file while the video is playing.

I'm currently using @BradLarson's (great) GPUImage framework to do so; the problem is that the framework doesn't support audio playback while playing the video.

So I have two options:

1) Dive into the GPUImage code and change GPUImageMovie so that it also processes the audio buffers. This requires knowledge of how to sync the audio and video frames, which unfortunately I don't have. I have seen some hacks that try to play the audio with AVAudioPlayer, but they have a lot of sync problems.

2) Use the Core Image framework instead of GPUImage.

So I want to take a look at the second option of using the native iOS Core Image and CIFilter to do the job.

The problem is, I couldn't find any example of how to do this with CIFilter. How do I apply filters to a video from a file?

Must I use an AVAssetReader to read the video and process each frame? If so, I'm back to my first problem of syncing the audio and video.
Or is there a way to apply the filter chain directly to the video or to the preview layer?
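
For reference, a minimal sketch of what that direct approach could look like, assuming the AVVideoComposition +videoCompositionWithAsset:applyingCIFiltersWithHandler: API (iOS 9 and later); the audio track keeps playing because AVPlayer handles it itself, and videoURL is assumed to exist:

#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>

AVAsset *asset = [AVAsset assetWithURL:videoURL]; // videoURL is assumed
CIFilter *filter = [CIFilter filterWithName:@"CIPhotoEffectNoir"];

AVVideoComposition *composition =
    [AVVideoComposition videoCompositionWithAsset:asset
                     applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest *request) {
        // Run the CIFilter on each source frame and hand the result back.
        [filter setValue:request.sourceImage forKey:kCIInputImageKey];
        CIImage *output = [filter.outputImage imageByCroppingToRect:request.sourceImage.extent];
        [request finishWithImage:output context:nil];
    }];

AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
item.videoComposition = composition;
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
[player play];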

Appreciate any help :)


Source: (StackOverflow)

"GPUImage.h" not found

I am trying to set up GPUImage in a project, but I am not able to track down why I'm getting the error: "GPUImage.h" not found. I have added the framework, set up the target dependency, added the framework's Header Search Path, and added the -ObjC Other Linker Flag. Still no luck. I have included my super simple test project, linked below, if anyone wants to take a look.

I know this must be basic and documented somewhere, but I searched on GitHub and did not find a reference to this particular issue.

Thanks for reading.

owolf.net/uploads/StackOverflow/GPUITest.zip


Source: (StackOverflow)

GPUImage filtering video

This is a follow-up to a previous but only marginally related question

I'm using the GPUImage library to apply filters to still photos and videos in my camera app. Almost everything is working nicely. One remaining issue that I have not been able to resolve is as follows:

  1. I capture a GPUImageMovie
  2. I write it to the file system
  3. I read it from the file system and apply a new filter
  4. I write it to a different URL in the filesystem

What is saved to that new URL is a video with the correct duration but no movement. When I hit play, it's just a still image, I think the first frame of the video. This happens any time I apply any filter to a video that I retrieve from the filesystem. Applying a filter while recording live video works just fine. My code is below.

Can anyone tell me how I can modify this to save the entire original video with a filter applied?

- (void)applyProcessingToVideoAtIndexPath:(NSIndexPath *)indexPath withFilter:(GPUImageFilter *)selectedFilter
{
    NSArray *urls = [self.videoURLsByIndexPaths objectForKey:self.indexPathForDisplayedImage];
    NSURL *url = [urls lastObject];
    self.editedMovie = [[GPUImageMovie alloc] initWithURL:url];
    assert(!!self.editedMovie);
    [self.editedMovie addTarget:selectedFilter]; // apply the user-selected filter to the file
    NSURL *movieURL = [self generatedMovieURL];
    // A different movie writer than the one I was using for live video capture.
    movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(640.0, 640.0)];
    [selectedFilter addTarget:movieWriter];
    movieWriter.shouldPassthroughAudio = YES;
    self.editedMovie.audioEncodingTarget = movieWriter;
    [self.editedMovie enableSynchronizedEncodingUsingMovieWriter:movieWriter];
    [movieWriter startRecording];
    [self.editedMovie startProcessing];

    __weak GPUImageMovieWriter *weakWriter = movieWriter;
    __weak CreateContentViewController *weakSelf = self;
    [movieWriter setCompletionBlock:^{
        [selectedFilter removeTarget:weakWriter];
        [weakWriter finishRecordingWithCompletionHandler:^{

            NSArray *urls = [weakSelf.videoURLsByIndexPaths objectForKey:weakSelf.indexPathForDisplayedImage];
            urls = [urls arrayByAddingObject:movieURL];
            NSMutableDictionary *mutableVideoURLs = [weakSelf.videoURLsByIndexPaths mutableCopy];
            [mutableVideoURLs setObject:urls forKey:weakSelf.indexPathForDisplayedImage];
            weakSelf.videoURLsByIndexPaths = mutableVideoURLs;
            dispatch_sync(dispatch_get_main_queue(), ^{
                [self.filmRoll reloadData];
                [weakSelf showPlayerLayerForURL:movieURL onTopOfImageView:weakSelf.displayedImageView];
            });
        }];
    }];
}

Source: (StackOverflow)

GPUImage equivalent of cv::findContours

My app uses OpenCV's cv::findContours on a binary image. I now need to make it real-time. GPUImage has a Canny edge filter, but I couldn't find anything related to findContours. Does GPUImage have anything that closely resembles findContours? If not, can someone suggest an alternative? Thanks


Source: (StackOverflow)

Using GPUImage to Recreate iOS 7 Glass Effect

I am trying to use the iOS 7 style glass effect in my app by applying image effects to a screenshot of an MKMapView. This UIImage category, provided by Apple, is what I am using as a baseline. This method desaturates the source image, applies a tint color, and blurs heavily using the input values:

[image applyBlurWithRadius:10.0
                 tintColor:[UIColor colorWithRed:229/255.0f green:246/255.0f blue:255/255.0f alpha:0.33] 
     saturationDeltaFactor:0.66
                 maskImage:nil];

This produces the effect I am looking for, but takes way too long: between 0.3 and 0.5 seconds to render on an iPhone 4.

(screenshot of the target effect)

I would like to use the excellent GPUImage as my preliminary attempts have been about 5-10 times faster, but I just can't seem to get it right.

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];

GPUImageSaturationFilter *saturationFilter = [[GPUImageSaturationFilter alloc] init];
saturationFilter.saturation = 0.33; // 1.0 - 0.66;
[stillImageSource addTarget:saturationFilter];

GPUImageMonochromeFilter *monochromeFilter = [[GPUImageMonochromeFilter alloc] init];
[monochromeFilter setColor:(GPUVector4){229/255.0f, 246/255.0f, 1.0f, 0.33f}];
[monochromeFilter setIntensity:0.2];
[saturationFilter addTarget:monochromeFilter];

GPUImageFastBlurFilter *blurFilter = [[GPUImageFastBlurFilter alloc] init];
blurFilter.blurSize = 2;
blurFilter.blurPasses = 3;
[monochromeFilter addTarget:blurFilter];

[saturationFilter prepareForImageCapture];
[monochromeFilter prepareForImageCapture];

[stillImageSource processImage];
image = [blurFilter imageFromCurrentlyProcessedOutput];

This produces an image which is close, but not quite there:

(screenshot of the GPUImage result)

The blur doesn't seem to be deep enough, but when I try to increase blurSize above that, it becomes grid-like, almost like a kaleidoscope. You can actually see the grid by zooming in on the second image. The tint color I am trying to mimic seems to just wash out the image instead of overlaying and blending, which is what I think the Apple sample is doing.

I have tried to set up the filters according to comments made by @BradLarson in another SO question. Am I using the wrong GPUImage filters to reproduce this effect, or am I just setting them up wrong?
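
For reference, later versions of GPUImage also include a dedicated GPUImageiOSBlurFilter aimed at this exact look; if that filter is available in the version in use, a minimal sketch would be (mapSnapshot is a hypothetical UIImage of the map):

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:mapSnapshot];

GPUImageiOSBlurFilter *blurFilter = [[GPUImageiOSBlurFilter alloc] init];
blurFilter.blurRadiusInPixels = 12.0; // roughly comparable to applyBlurWithRadius:10.0
blurFilter.saturation = 0.66;         // reduce saturation, as in the Apple category

[stillImageSource addTarget:blurFilter];
[blurFilter useNextFrameForImageCapture];
[stillImageSource processImage];

UIImage *blurredImage = [blurFilter imageFromCurrentFramebuffer];

This handles the blur and desaturation in one pass; the blue tint would still need a separate blend step on top.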


Source: (StackOverflow)

How to Implement GPUImageMaskFilter using GPUImage

I need to cut from the full image using the mask and create the masked image.

Full Image + Mask Image = (expected masked result image)

I tried the following:

UIImage *imgMask = [UIImage imageNamed:@"Mask.png"];
UIImage *imgBgImage = [UIImage imageNamed:@"Full.png"];

GPUImageMaskFilter *maskingFilter = [[GPUImageMaskFilter alloc] init];

GPUImagePicture *maskGpuImage = [[GPUImagePicture alloc] initWithImage:imgMask];
GPUImagePicture *FullGpuImage = [[GPUImagePicture alloc] initWithImage:imgBgImage];

[maskGpuImage addTarget:maskingFilter];
[maskGpuImage processImage];

[maskingFilter useNextFrameForImageCapture];

[FullGpuImage addTarget:maskingFilter];
[FullGpuImage processImage];

UIImage *OutputImage = [maskingFilter imageFromCurrentFramebuffer];

But my generated output image is: (incorrect output image)

Please lend a hand, guys. Cheers.

Also, thanks to BradLarson.


Source: (StackOverflow)

GPUImage add hue/color adjustments per-RGB channel (adjust reds to be more pink or orange)

I'm stumped trying to adjust the hue of a specific channel (or perhaps, more specifically, a specific range of colors - in this case, reds). Looking at the hue filter, I thought I might get somewhere by commenting out the green and blue modifiers, so that the changes impact only the red channel:

 precision highp float;
 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;
 uniform mediump float hueAdjust;
 const highp  vec4  kRGBToYPrime = vec4 (0.299, 0.587, 0.114, 0.0);
 const highp  vec4  kRGBToI     = vec4 (0.595716, -0.274453, -0.321263, 0.0);
 const highp  vec4  kRGBToQ     = vec4 (0.211456, -0.522591, 0.31135, 0.0);

 const highp  vec4  kYIQToR   = vec4 (1.0, 0.9563, 0.6210, 0.0);
 const highp  vec4  kYIQToG   = vec4 (1.0, -0.2721, -0.6474, 0.0);
 const highp  vec4  kYIQToB   = vec4 (1.0, -1.1070, 1.7046, 0.0);

 void main ()
 {
     // Sample the input pixel
     highp vec4 color   = texture2D(inputImageTexture, textureCoordinate);

     // Convert to YIQ
     highp float   YPrime  = dot (color, kRGBToYPrime);
     highp float   I      = dot (color, kRGBToI);
     highp float   Q      = dot (color, kRGBToQ);

     // Calculate the hue and chroma
     highp float   hue     = atan (Q, I);
     highp float   chroma  = sqrt (I * I + Q * Q);

     // Make the user's adjustments
     hue += (-hueAdjust); //why negative rotation?

     // Convert back to YIQ
     Q = chroma * sin (hue);
     I = chroma * cos (hue);

     // Convert back to RGB
     highp vec4    yIQ   = vec4 (YPrime, I, Q, 0.0);
     color.r = dot (yIQ, kYIQToR);
//  -->    color.g = dot (yIQ, kYIQToG); 
//  -->   color.b = dot (yIQ, kYIQToB);

     // Save the result
     gl_FragColor = color;
 }

But that just leaves the photo either grey/blue and washed-out or purplish green. Am I on the right track? If not, how can I modify this filter to affect individual channels while leaving the others intact?

Some examples:

Original, and the effect I'm trying to achieve:

(The second image is almost unnoticeably different; however, the red channel's hue has been made slightly more pink. I need to be able to adjust it between pink <-> orange.)

But here's what I get with B and G commented out:

(Left side: <0º, right side: >0º)

It looks to me like it's not affecting the hue of the reds in the way I'd like it to; possibly I'm approaching this incorrectly, or if I'm on the right track, this code isn't correctly adjusting the red channel hue?

(I also tried to achieve this effect using the GPUImageColorMatrixFilter, but I didn't get very far with it).

Edit: here's my current iteration of the shader using @VB_overflow's code + GPUImage wrapper, which is functionally affecting the input image in a way similar to what I'm aiming for:

#import "GPUImageSkinToneFilter.h"

@implementation GPUImageSkinToneFilter

NSString *const kGPUImageSkinToneFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 // [-1;1] <=> [pink;orange]
 uniform highp float skinToneAdjust; // will make reds more pink

 // Other parameters
 uniform mediump float skinHue;
 uniform mediump float skinHueThreshold;
 uniform mediump float maxHueShift;
 uniform mediump float maxSaturationShift;

 // RGB <-> HSV conversion, thanks to http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl
 highp vec3 rgb2hsv(highp vec3 c)
{
    highp vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    highp vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    highp vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));

    highp float d = q.x - min(q.w, q.y);
    highp float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

 // HSV <-> RGB conversion, thanks to http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl
 highp vec3 hsv2rgb(highp vec3 c)
{
    highp vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    highp vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}

 // Main
 void main ()
{

    // Sample the input pixel
    highp vec4 colorRGB = texture2D(inputImageTexture, textureCoordinate);

    // Convert color to HSV, extract hue
    highp vec3 colorHSV = rgb2hsv(colorRGB.rgb);
    highp float hue = colorHSV.x;

    // check how far from skin hue
    highp float dist = hue - skinHue;
    if (dist > 0.5)
        dist -= 1.0;
    if (dist < -0.5)
        dist += 1.0;
    dist = abs(dist)/0.5; // normalized to [0,1]

    // Apply Gaussian like filter
    highp float weight = exp(-dist*dist*skinHueThreshold);
    weight = clamp(weight, 0.0, 1.0);

    // We want more orange, so increase saturation
    if (skinToneAdjust > 0.0)
        colorHSV.y += skinToneAdjust * weight * maxSaturationShift;
    // we want more pinks, so decrease hue
    else
        colorHSV.x += skinToneAdjust * weight * maxHueShift;

    // final color
    highp vec3 finalColorRGB = hsv2rgb(colorHSV.rgb);

    // display
    gl_FragColor = vec4(finalColorRGB, 1.0);
}
);

#pragma mark -
#pragma mark Initialization and teardown
@synthesize skinToneAdjust;
@synthesize skinHue;
@synthesize skinHueThreshold;
@synthesize maxHueShift;
@synthesize maxSaturationShift;

- (id)init
{
    if(! (self = [super initWithFragmentShaderFromString:kGPUImageSkinToneFragmentShaderString]) )
    {
        return nil;
    }

    skinToneAdjustUniform = [filterProgram uniformIndex:@"skinToneAdjust"];
    skinHueUniform = [filterProgram uniformIndex:@"skinHue"];
    skinHueThresholdUniform = [filterProgram uniformIndex:@"skinHueThreshold"];
    maxHueShiftUniform = [filterProgram uniformIndex:@"maxHueShift"];
    maxSaturationShiftUniform = [filterProgram uniformIndex:@"maxSaturationShift"];

    self.skinHue = 0.05;
    self.skinHueThreshold = 50.0;
    self.maxHueShift = 0.14;
    self.maxSaturationShift = 0.25;

    return self;
}

#pragma mark -
#pragma mark Accessors

- (void)setSkinToneAdjust:(CGFloat)newValue
{
    skinToneAdjust = newValue;
    [self setFloat:newValue forUniform:skinToneAdjustUniform program:filterProgram];
}

- (void)setSkinHue:(CGFloat)newValue
{
    skinHue = newValue;
    [self setFloat:newValue forUniform:skinHueUniform program:filterProgram];
}

- (void)setSkinHueThreshold:(CGFloat)newValue
{
    skinHueThreshold = newValue;
    [self setFloat:newValue forUniform:skinHueThresholdUniform program:filterProgram];
}

- (void)setMaxHueShift:(CGFloat)newValue
{
    maxHueShift = newValue;
    [self setFloat:newValue forUniform:maxHueShiftUniform program:filterProgram];
}

- (void)setMaxSaturationShift:(CGFloat)newValue
{
    maxSaturationShift = newValue;
    [self setFloat:newValue forUniform:maxSaturationShiftUniform program:filterProgram];
}

@end

Source: (StackOverflow)

Use ColorMatrix or HexColor in BlendModeFilter - Android?

Currently the blend modes (Subtract, Exclusion, etc.) use the LauncherImage as the mask. Can I apply these blend modes to a ColorMatrix?

I'm using the GPUImage library.

colorMatrix[
    0.393, 0.7689999, 0.18899999, 0, 0,
    0.349, 0.6859999, 0.16799999, 0, 0,
    0.272, 0.5339999, 0.13099999, 0, 0,
    0,     0,         0,          1, 0];

SubtractBlendFilter.java

public class GPUImageSubtractBlendFilter extends GPUImageTwoInputFilter {
public static final String SUBTRACT_BLEND_FRAGMENT_SHADER = "varying highp vec2 textureCoordinate;\n" +
        " varying highp vec2 textureCoordinate2;\n" +
        "\n" +
        " uniform sampler2D inputImageTexture;\n" +
        " uniform sampler2D inputImageTexture2;\n" +
        " \n" +
        " void main()\n" +
        " {\n" +
        "   lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);\n" +
        "   lowp vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);\n" +
        "\n" +
        "   gl_FragColor = vec4(textureColor.rgb - textureColor2.rgb, textureColor.a);\n" +
        " }";

public GPUImageSubtractBlendFilter() {
    super(SUBTRACT_BLEND_FRAGMENT_SHADER);
}
}

GPUImageTwoInputFilter.java

public class GPUImageTwoInputFilter extends GPUImageFilter {
private static final String VERTEX_SHADER = "attribute vec4 position;\n" +
        "attribute vec4 inputTextureCoordinate;\n" +
        "attribute vec4 inputTextureCoordinate2;\n" +
        " \n" +
        "varying vec2 textureCoordinate;\n" +
        "varying vec2 textureCoordinate2;\n" +
        " \n" +
        "void main()\n" +
        "{\n" +
        "    gl_Position = position;\n" +
        "    textureCoordinate = inputTextureCoordinate.xy;\n" +
        "    textureCoordinate2 = inputTextureCoordinate2.xy;\n" +
        "}";

public int mFilterSecondTextureCoordinateAttribute;
public int mFilterInputTextureUniform2;
public int mFilterSourceTexture2 = OpenGlUtils.NO_TEXTURE;
private ByteBuffer mTexture2CoordinatesBuffer;
private Bitmap mBitmap;

public GPUImageTwoInputFilter(String fragmentShader) {
    this(VERTEX_SHADER, fragmentShader);
}

public GPUImageTwoInputFilter(String vertexShader, String fragmentShader) {
    super(vertexShader, fragmentShader);
    setRotation(Rotation.NORMAL, false, false);
}

@Override
public void onInit() {
    super.onInit();

    mFilterSecondTextureCoordinateAttribute = GLES20.glGetAttribLocation(getProgram(), "inputTextureCoordinate2");
    mFilterInputTextureUniform2 = GLES20.glGetUniformLocation(getProgram(), "inputImageTexture2"); // This does assume a name of "inputImageTexture2" for second input texture in the fragment shader
    GLES20.glEnableVertexAttribArray(mFilterSecondTextureCoordinateAttribute);

    if (mBitmap != null&&!mBitmap.isRecycled()) {
        setBitmap(mBitmap);
    }
}

public void setBitmap(final Bitmap bitmap) {
    if (bitmap != null && bitmap.isRecycled()) {
        return;
    }
    mBitmap = bitmap;
    if (mBitmap == null) {
        return;
    }
    runOnDraw(new Runnable() {
        public void run() {
            if (mFilterSourceTexture2 == OpenGlUtils.NO_TEXTURE) {
                if (bitmap == null || bitmap.isRecycled()) {
                    return;
                }
                GLES20.glActiveTexture(GLES20.GL_TEXTURE3);
                mFilterSourceTexture2 = OpenGlUtils.loadTexture(bitmap, OpenGlUtils.NO_TEXTURE, false);
            }
        }
    });
}

public Bitmap getBitmap() {
    return mBitmap;
}

public void recycleBitmap() {
    if (mBitmap != null && !mBitmap.isRecycled()) {
        mBitmap.recycle();
        mBitmap = null;
    }
}

public void onDestroy() {
    super.onDestroy();
    GLES20.glDeleteTextures(1, new int[]{
            mFilterSourceTexture2
    }, 0);
    mFilterSourceTexture2 = OpenGlUtils.NO_TEXTURE;
}

@Override
protected void onDrawArraysPre() {
    GLES20.glEnableVertexAttribArray(mFilterSecondTextureCoordinateAttribute);
    GLES20.glActiveTexture(GLES20.GL_TEXTURE3);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mFilterSourceTexture2);
    GLES20.glUniform1i(mFilterInputTextureUniform2, 3);

    mTexture2CoordinatesBuffer.position(0);
    GLES20.glVertexAttribPointer(mFilterSecondTextureCoordinateAttribute, 2, GLES20.GL_FLOAT, false, 0, mTexture2CoordinatesBuffer);
}

public void setRotation(final Rotation rotation, final boolean flipHorizontal, final boolean flipVertical) {
    float[] buffer = TextureRotationUtil.getRotation(rotation, flipHorizontal, flipVertical);

    ByteBuffer bBuffer = ByteBuffer.allocateDirect(32).order(ByteOrder.nativeOrder());
    FloatBuffer fBuffer = bBuffer.asFloatBuffer();
    fBuffer.put(buffer);
    fBuffer.flip();

    mTexture2CoordinatesBuffer = bBuffer;
}
}

My guess is that it involves changing something in SUBTRACT_BLEND_FRAGMENT_SHADER and VERTEX_SHADER.


Source: (StackOverflow)