CS 0007 Project 2
Overview
In this assignment, you’ll use your coding skills that you’ve developed over the semester to write a variety of image effects. You may use any code from lecture, recitation, or the textbook as a reference, but this is an individual assignment. You may not share code with or copy code from any other person (whether they are in the class or not).
Requirements
You will modify the starter code, ImageLab.java, to implement several different image effects. You may also create as many helper methods as you'd like (these will not be graded, but may help you write cleaner code).
You will build your code as normal, using javac ImageLab.java. When you run it, you'll provide an input image filename.
For example, to run it on a file called "cathy.jpg", use java ImageLab cathy.jpg. The code will output a series of images (each corresponding to one effect).
Important: you do not have to do every task in this project to get 100%. See the grading section for more information.
Important: Like in project one, there is a reference implementation. You should use it! On the command line, run java -jar Reference.jar inputimagefile.jpg.
Submission Instructions and Grading
YOU DO NOT NEED TO IMPLEMENT ALL
EIGHT EFFECTS TO GET FULL CREDIT
You will get 5 points if the code you submit compiles without errors.
Task                    | Possible points
------------------------|----------------
Compiles without errors | 5
Threshold               | 15
Color Rotation          | 15
Quantize                | 20
Warhol                  | 20
Sharpen                 | 25
Box Blur                | 25
Simple Edge Detection   | 25
Sobel Edge Detection    | 30
The first four effects (Threshold, Color Rotation, Quantize, and Warhol) are relatively straightforward to implement. The last four (Sharpen, Box Blur, Simple Edge Detect, and Sobel Edge Detect) are more complicated. The points are assigned so that if you only do the simple effects and make no attempt at any of the more complicated ones (and your code compiles!), you can still get a 75%. You can get 100% if you implement all of the simple effects plus one of the more complex ones. You can get more than 100% on this project if you successfully implement more. In order to keep the scoring fair, I'm capping the extra credit at 121 points according to the following formula:
score = min(raw score, 100) + ∗ max(raw score − 100, 0)
In English, this means that you will earn 1 point for every point up to 100%, then of every point over 100%. As always, partial credit will be awarded where applicable.
You will submit your code file, ImageLab.java, on CourseWeb under the "Projects" section. You may submit as many times as you want, but only your last submission will be graded.
To grade your submission, I’ll be running it on a series of test images. I’ve provided the images that I used in the examples in the ”examples” zipfile, though I may use others when grading. I’ll also be reading your code.
Strategy and Hints
This is a big assignment with lots of parts and lots of options. I recommend your strategy be something like this:
1. START EARLY.
2. NO REALLY, START EARLY. If you wait until the last week to start, you’re going to have a bad time.
3. Read through the background section and make sure you understand it. You can also refer to the lecture slides from 10/20.
4. Read through the imageToGrayscale() example and make sure you understand it. Copy/pasting the code from this and then modifying it is a good place to start for each of the first four effects.
5. Pick either Threshold or Color Rotation to do first. Get it working completely, then start on the other.
6. Once you’ve finished the first two, Quantize and Warhol shouldn’t be much harder.
7. Take a breath.
8. Read through the convolution background section. Make sure you understand it. You can always email Jeff or attend office hours (Jeff’s or the TAs’) to get help with this.
9. Consider implementing the generic convolve() method as described later in this handout. Especially if you’re hoping to get above 100%, this will probably save you time later.
10. Box Blur is probably the easiest of the convolutional operations. If you’re only doing one, you can try this one first.
Some hints and tricks:
1. Helper methods are your friends! Feel free to write as many as you want, especially for things like limiting values to between 0 and 255.
2. The imageToGrayscale() method takes in a BufferedImage and returns a grayscale version by setting the red, green, and blue values of each pixel to the intensity at that pixel. In other words, if you're implementing an effect like Thresholding or Edge Detection which operates on the intensity, you can convert the input to grayscale first and then just look at the red (or green, or blue) value at each pixel to get the intensity.
3. I will award partial credit wherever I can. If you have something that mostly works, leave it in! If it’s preventing the rest of your code from compiling, just /* comment it out */. I’ll read through the commented code and give points if you’re on the right track.
4. Try your code on your own images! If you find something that looks especially cool, send it my way.
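Hint 1's "limit values to between 0 and 255" helper might look like the following. This is only a sketch; the class and method names here are illustrative and not part of the starter code.

```java
public class ClampHelper {
    // Clamp a channel value into the valid color range 0-255, so that
    // new Color(...) never throws an IllegalArgumentException.
    public static int clamp(int value) {
        if (value < 0) {
            return 0;
        }
        if (value > 255) {
            return 255;
        }
        return value;
    }
}
```

Calling clamp() on every computed red, green, and blue value right before constructing a Color keeps the clamping logic in one place instead of repeating the if/else in every effect.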
Background: Java Image Manipulation
1 BufferedImage
Java provides the BufferedImage class to manipulate images. I have provided two methods, readImage() and writeImage(), which simplify the usage of BufferedImages for the purposes of this project. Aside from those, you’ll be using BufferedImages like any other Java programmer would.
1.1 Opening and Creating Images
To open an image file as a BufferedImage, you can simply say
BufferedImage my_image = readImage("image_filename.jpg");
The starter code handles loading and saving images, so you shouldn’t have to use this method directly.
To create a blank BufferedImage,
BufferedImage empty_image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
In this example, width and height are the width and height of the new image (as ints). The third argument, BufferedImage.TYPE_INT_RGB, tells Java that we want to specify colors using their red, green, and blue values. You will not need to change this for any of the effects.
1.2 Saving Images
I have provided the writeImage() method. If img is an image, you can save it as a jpg file by using writeImage(img, "filename.jpg"). The starter code handles loading and saving images, so you shouldn't have to use this method directly.
2 Pixels and Colors
As demonstrated in class, you can think of an image as a 2D array (grid) of pixels. Each pixel is a single color, composed of its component red, green, and blue values. In class we represented those as a third dimension on the array. Java instead represents each pixel as a Color object (imported from the java.awt package). You don't need to know much about Colors aside from these basics:
• If img is a BufferedImage object, you can get the value of a pixel at a position (x, y) by saying Color my_color = new Color(img.getRGB(x, y));
– Be sure to check that (x, y) is a valid pixel! If x is not between 0 and the image width minus one (img.getWidth() - 1), or if y is not between 0 and the image height minus one (img.getHeight() - 1), you'll either get an exception or weird results.
• Once you have a Color object, you can extract the red, green, and blue components as integers using
– int red = my_color.getRed();
– int green = my_color.getGreen();
– int blue = my_color.getBlue();
• To create a new Color object from a given red, green, and blue value, use Color my_new_color = new Color(red, green, blue);.
– Important: Your values for red, green, and blue must be integers in the range 0 - 255. If you provide a value that is negative, or that's greater than 255, Java will throw an IllegalArgumentException. Be sure to check your values before creating a new Color.
• If my_color is a Color object and img is a BufferedImage object, you can set the color at position (x, y) using
img.setRGB(x, y, my_color.getRGB());
[0 pts] Converting an Image to Grayscale
I have provided an example of a very basic image effect, conversion to grayscale. This demonstrates basic image operations.
public static BufferedImage imageToGrayscale(BufferedImage input) {
    // store the image width and height in variables for later use
    int img_width = input.getWidth();
    int img_height = input.getHeight();
    // create a new BufferedImage with the same width and height as the input
    // image. TYPE_INT_RGB means we want to specify pixel values using their
    // red, green, and blue components.
    BufferedImage output_img = new BufferedImage(
        img_width, img_height, BufferedImage.TYPE_INT_RGB);

    // Loop over all pixels
    for (int x = 0; x < img_width; x++) {
        for (int y = 0; y < img_height; y++) {
            // Get the color of the input image as a java.awt.Color object
            Color color_at_pos = new Color(input.getRGB(x, y));
            // Call the getRed(), getGreen(), and getBlue() methods on the color
            // object to store the red, green, and blue values into separate
            // integer variables.
            int red = color_at_pos.getRed();
            int green = color_at_pos.getGreen();
            int blue = color_at_pos.getBlue();
            // to get the grayscale version, we just average the red, green, and
            // blue channels, then use that average as the red, green, and blue
            // values in the output image.
            int average = (red + green + blue) / 3;

            // You'll get an IllegalArgumentException if you try to specify red,
            // green, or blue values greater than 255 or less than 0. While it's
            // mathematically impossible to average three values in that range and
            // get a value outside the range, the calculations in the other tasks
            // will sometimes produce values outside the range. This code "clamps"
            // the value to the range 0 - 255.
            if (average < 0) {
                average = 0;
            } else if (average > 255) {
                average = 255;
            }
            // make a new Color object, which has the average intensity ((r+g+b)/3)
            // for each of its channels.
            Color average_color = new Color(average, average, average);

            // in the output image, set the color at position (x, y) to our
            // calculated average color
            output_img.setRGB(x, y, average_color.getRGB());
        }
    }
    // return the output image.
    return output_img;
}
[15pts] Effect 1: Thresholding
Figure 1: An image, before and after thresholding. (a) Original image. (b) Thresholded (level 128).
Thresholding is perhaps the most basic image operation: for each pixel, consider its intensity (for our purposes, intensity will be the average of the red, green, and blue channels). If the intensity is greater than a given threshold level, set the corresponding pixel in the output image to white. If the intensity is less than the given level, set the corresponding pixel in the output image to black. Your code will probably be something like
1. Create a new BufferedImage that’s the same size as the input image. This will be the output image.
2. Loop over each pixel at (x, y)
(a) Extract the red, green, and blue components of the pixel at (x, y)
(b) Average the red, green, and blue components ((r+g+b)/3)
(c) If the computed average is greater than the level, set the output image to white at location (x, y). White is the color with 255 as its red, green, and blue values.
(d) If the computed average is not greater than the level, set the output image to black at location (x, y). Black is the color with 0 as its red, green, and blue values.
3. Return the output image
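The steps above can be sketched as follows. This is one possible implementation, not the only correct one; the class name ThresholdSketch and method name threshold are illustrative and not part of the starter code.

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class ThresholdSketch {
    public static BufferedImage threshold(BufferedImage input, int level) {
        int w = input.getWidth();
        int h = input.getHeight();
        // output image, same size as the input
        BufferedImage output = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                Color c = new Color(input.getRGB(x, y));
                // intensity = average of the three channels
                int average = (c.getRed() + c.getGreen() + c.getBlue()) / 3;
                // above the level -> white, otherwise -> black
                Color out = (average > level)
                        ? new Color(255, 255, 255)
                        : new Color(0, 0, 0);
                output.setRGB(x, y, out.getRGB());
            }
        }
        return output;
    }
}
```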
[15pts] Effect 2: Color Rotation
Figure 2: An image, before and after color rotation. (a) Original image. (b) Color rotated.
Color rotation is a simple effect that can produce some cool results. It works by breaking the image down into its red, green, and blue channels, then by creating a new image with the values swapped around. For each pixel in the output, the red value will be the blue value from the input, the green value will be the red value from the input, and the blue value will be the green value from the input.
Your code will probably be something like
1. Create a new BufferedImage that’s the same size as the input image. This will be the output image.
2. Loop over each pixel at (x, y)
(a) Extract the red, green, and blue components of the pixel at (x, y)
(b) Create a new color. Its red value should be the blue value from the input, its green value should be the red value from the input, and its blue value should be the green value from the input.
(c) Set the output image at position (x, y) to the created color
3. Return the output image
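A sketch of the steps above follows. The class and method names here are illustrative, not part of the starter code.

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class ColorRotationSketch {
    public static BufferedImage colorRotate(BufferedImage input) {
        int w = input.getWidth();
        int h = input.getHeight();
        BufferedImage output = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                Color c = new Color(input.getRGB(x, y));
                // output red = input blue, output green = input red,
                // output blue = input green
                Color rotated = new Color(c.getBlue(), c.getRed(), c.getGreen());
                output.setRGB(x, y, rotated.getRGB());
            }
        }
        return output;
    }
}
```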
[20pts] Effect 3: Quantization
Figure 3: An image, before and after quantization. (a) Original image. (b) Quantized (4 values per channel).
Each pixel in an image has 256 possible values for its red, green, and blue values (0 - 255, inclusive). This gives a total of 256 ∗ 256 ∗ 256 = 16777216 possible colors. Quantization reduces the number of possible values. In the example above, there are 4 possible values for each channel, for a total of 4 ∗ 4 ∗ 4 = 64 possible colors.
Your code will probably be something like
1. Create a new BufferedImage that’s the same size as the input image. This will be the output image.
2. Loop over each pixel at (x, y)
(a) Extract the red, green, and blue components of the pixel at (x, y)
(b) Use integer division to divide each component by 64, then multiply those results by 85.
(c) Create a new Color using the new (quantized) red, green, and blue values. Set the output image at position (x, y) to the created color.
3. Return the output image
This works by taking advantage of the "drop the remainder" effect of integer division. Each color starts out as a number between 0 and 255. When you divide by 64, any number between 0 and 63 comes out to 0, any number between 64 and 127 comes out to 1, any number between 128 and 191 comes out to 2, and anything between 192 and 255 comes out to 3. We then multiply by 85 to scale these values up into colors bright enough to see: 0 stays as 0, 1 becomes 85, 2 becomes 170, and 3 becomes 255.
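The divide-then-multiply step can be sketched as a small helper plus the usual pixel loop. The names below are illustrative, not part of the starter code.

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class QuantizeSketch {
    // Quantize one channel value (0-255) to one of four levels: 0, 85, 170, 255.
    // Integer division by 64 drops the remainder; multiplying by 85 rescales.
    public static int quantizeChannel(int value) {
        return (value / 64) * 85;
    }

    public static BufferedImage quantize(BufferedImage input) {
        int w = input.getWidth();
        int h = input.getHeight();
        BufferedImage output = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                Color c = new Color(input.getRGB(x, y));
                Color q = new Color(quantizeChannel(c.getRed()),
                                    quantizeChannel(c.getGreen()),
                                    quantizeChannel(c.getBlue()));
                output.setRGB(x, y, q.getRGB());
            }
        }
        return output;
    }
}
```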
[20pts] Effect 4: Low-budget ”Warhol effect”
Figure 4: An image, before and after the effect. (a) Original image. (b) "Warhol"-ized.
Andy Warhol was an artist famous for his role in the Pop-Art movement in the 1950s and 1960s. He was born in Pittsburgh and attended CMU (then Carnegie Tech) briefly before moving to New York City. One of his most iconic effects is splitting an image into multiple, partially-colored copies (search for the Warhol Marilyn series). We can implement an easier (though less impressive) version of this effect.
1. Create a new BufferedImage that’s three times the width and the same height as the input image. This will be the output image.
2. Loop over each pixel (x, y) in the image.
(a) Extract the red, green, and blue components of the pixel at (x, y)
(b) Create three new colors. If the original pixel has red, green, and blue values (r, g, b):
• The first color should be (r, (g+b)/2, (g+b)/2).
• The second color should be ((r+b)/2, g, (r+b)/2).
• The third color should be ((r+g)/2, (r+g)/2, b).
(c) Set the pixel (x, y) in the output image to the first color.
(d) Set the pixel (x+W, y) in the output image to the second color (where W is the width of the input image).
(e) Set the pixel (x+2*W, y) in the output image to the third color (where W is the width of the input image).
3. Return the output image
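One possible sketch of the steps above is shown below. As with the other sketches, the class and method names are illustrative only.

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class WarholSketch {
    public static BufferedImage warhol(BufferedImage input) {
        int w = input.getWidth();
        int h = input.getHeight();
        // output is three times the width of the input, same height
        BufferedImage output = new BufferedImage(3 * w, h, BufferedImage.TYPE_INT_RGB);
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                Color c = new Color(input.getRGB(x, y));
                int r = c.getRed();
                int g = c.getGreen();
                int b = c.getBlue();
                // three partially-colored variants of the same pixel
                Color first  = new Color(r, (g + b) / 2, (g + b) / 2);
                Color second = new Color((r + b) / 2, g, (r + b) / 2);
                Color third  = new Color((r + g) / 2, (r + g) / 2, b);
                output.setRGB(x, y, first.getRGB());          // left copy
                output.setRGB(x + w, y, second.getRGB());     // middle copy
                output.setRGB(x + 2 * w, y, third.getRGB());  // right copy
            }
        }
        return output;
    }
}
```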
Background: Image Convolution
The remaining four (well, three and a half) effects are "convolution" effects. This is a fancy word with a specific mathematical meaning that has to do with matrix multiplication and signal processing. For our purposes, it means that each pixel in the output image is computed based not only on the pixel in the same position in the input, but also on that pixel's neighbors. The easiest way to do all four is to write a generic convolve() method and use it to implement all of the remaining effects, though that's certainly not necessary to get full credit.
Image convolution is a process performed by computing the value for the output pixel based not just on the pixel at the corresponding spot in the input, but based on that pixel and all eight of its neighbors. The "weight" that each of those nine pixels should have in the output pixel is given by a 3x3 matrix of numbers (if you're reading about this elsewhere, this matrix is sometimes referred to as the "kernel"). For example, consider the following 5x5 image (where each letter represents a pixel) and matrix (this happens to be the matrix for edge detection):
A B C D E
F G H I J
K L M N O
P Q R S T
U V W X Y

 0 -1  0
-1  4 -1
 0 -1  0
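A generic convolve() method, as suggested in the strategy section, might be sketched like this. This is only one way to structure it, and the border handling here (copying edge pixels unchanged) is an assumption; the handout does not prescribe how to treat pixels whose neighbors fall outside the image.

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class ConvolveSketch {
    // Apply a 3x3 kernel to every interior pixel. kernel[row][col] is the
    // weight for the neighbor at (x + col - 1, y + row - 1).
    public static BufferedImage convolve(BufferedImage input, double[][] kernel) {
        int w = input.getWidth();
        int h = input.getHeight();
        BufferedImage output = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                // border pixels have missing neighbors; copy them unchanged
                if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {
                    output.setRGB(x, y, input.getRGB(x, y));
                    continue;
                }
                double r = 0, g = 0, b = 0;
                // weighted sum over the pixel and its eight neighbors
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        Color c = new Color(input.getRGB(x + dx, y + dy));
                        double weight = kernel[dy + 1][dx + 1];
                        r += weight * c.getRed();
                        g += weight * c.getGreen();
                        b += weight * c.getBlue();
                    }
                }
                // sums can fall outside 0-255, so clamp before making a Color
                Color out = new Color(clamp((int) r), clamp((int) g), clamp((int) b));
                output.setRGB(x, y, out.getRGB());
            }
        }
        return output;
    }

    private static int clamp(int v) {
        return Math.max(0, Math.min(255, v));
    }
}
```

With this in place, each convolutional effect reduces to a call with a different kernel, e.g. the edge-detection matrix above, or a box-blur kernel where every entry is 1.0/9.0.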
2023-03-24