Sunday, August 21, 2016

Read a CSV file with a header

import csv

f = open('myclone.csv', 'rb')
reader = csv.reader(f)
headers = reader.next()              # the first row holds the column names

# build a dict mapping each column name to a list of its values
column = {}
for h in headers:
    column[h] = []
for row in reader:
    for h, v in zip(headers, row):
        column[h].append(v)

f.close()
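A quick sanity check of the result; the 'email' column below is only an illustration, so substitute a header that actually exists in your file:

print column.keys()          # all column names found in the header row
print column['email'][:5]    # first five values of a hypothetical 'email' column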

The Image Module

The Image module provides a class with the same name which is used to represent a PIL image. The module also provides a number of factory functions, including functions to load images from files, and to create new images.

Examples #

The following script loads an image, rotates it 45 degrees, and displays it using an external viewer (usually xv on Unix, and the paint program on Windows).
Open, rotate, and display an image (using the default viewer)
from PIL import Image
im = Image.open("bride.jpg")
im.rotate(45).show()
The following script creates nice 128x128 thumbnails of all JPEG images in the current directory.
Create thumbnails
from PIL import Image
import glob, os

size = 128, 128

for infile in glob.glob("*.jpg"):
    file, ext = os.path.splitext(infile)
    im = Image.open(infile)
    im.thumbnail(size, Image.ANTIALIAS)
    im.save(file + ".thumbnail", "JPEG")

Functions #

new #

Image.new(mode, size) ⇒ image
Image.new(mode, size, color) ⇒ image
Creates a new image with the given mode and size. Size is given as a (width, height)-tuple, in pixels. The color is given as a single value for single-band images, and a tuple for multi-band images (with one value for each band). In 1.1.4 and later, you can also use color names (see the ImageColor module documentation for details). If the color argument is omitted, the image is filled with zero (this usually corresponds to black). If the color is None, the image is not initialised. This can be useful if you’re going to paste or draw things in the image.
from PIL import Image
im = Image.new("RGB", (512, 512), "white")

open #

Image.open(file) ⇒ image
Image.open(file, mode) ⇒ image
Opens and identifies the given image file. This is a lazy operation; the function reads the file header, but the actual image data is not read from the file until you try to process the data (call the load method to force loading). If the mode argument is given, it must be “r”.
You can use either a string (representing the filename) or a file object as the file argument. In the latter case, the file object must implement read, seek, and tell methods, and be opened in binary mode.
from PIL import Image
im = Image.open("lenna.jpg")
from PIL import Image
from StringIO import StringIO

# read data from string
im = Image.open(StringIO(data))

blend #

Image.blend(image1, image2, alpha) ⇒ image
Creates a new image by interpolating between the given images, using a constant alpha. Both images must have the same size and mode.
    out = image1 * (1.0 - alpha) + image2 * alpha
If the alpha is 0.0, a copy of the first image is returned. If the alpha is 1.0, a copy of the second image is returned. There are no restrictions on the alpha value. If necessary, the result is clipped to fit into the allowed output range.
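For example, a minimal sketch (the file name is just a placeholder; the second input is derived from the first, so size and mode match automatically):

from PIL import Image

im1 = Image.open("bride.jpg")
im2 = im1.rotate(45)                 # same size and mode as im1
out = Image.blend(im1, im2, 0.5)     # a 50/50 mix of the two images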

composite #

Image.composite(image1, image2, mask) ⇒ image
Creates a new image by interpolating between the given images, using the corresponding pixels from a mask image as alpha. The mask can have mode “1”, “L”, or “RGBA”. All images must be the same size.
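A sketch, assuming a second image of the same size is available ("groom.jpg" is hypothetical):

from PIL import Image

im1 = Image.open("bride.jpg")
im2 = Image.open("groom.jpg")            # hypothetical file, must match im1's size
mask = Image.new("L", im1.size, 128)     # a flat 50% mask; any "1", "L" or "RGBA" image works
out = Image.composite(im1, im2, mask)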

eval #

Image.eval(image, function) ⇒ image
Applies the function (which should take one argument) to each pixel in the given image. If the image has more than one band, the same function is applied to each band. Note that the function is evaluated once for each possible pixel value, so you cannot use random components or other generators.
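For instance, a per-band negative of an 8-bit image (a sketch; the file name is a placeholder):

from PIL import Image

im = Image.open("bride.jpg")
inverted = Image.eval(im, lambda v: 255 - v)   # called once per possible pixel value, per band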

frombuffer #

Image.frombuffer(mode, size, data) ⇒ image
(New in PIL 1.1.4). Creates an image memory from pixel data in a string or buffer object, using the standard “raw” decoder. For some modes, the image memory will share memory with the original buffer (this means that changes to the original buffer object are reflected in the image). Not all modes can share memory; supported modes include “L”, “RGBX”, “RGBA”, and “CMYK”. For other modes, this function behaves like a corresponding call to the fromstring function.
Note: In versions up to and including 1.1.6, the default orientation differs from that of fromstring. This may be changed in future versions, so for maximum portability, it’s recommended that you spell out all arguments when using the “raw” decoder:
im = Image.frombuffer(mode, size, data, "raw", mode, 0, 1)
Image.frombuffer(mode, size, data, decoder, parameters) ⇒ image
Same as the corresponding fromstring call.

fromstring #

Image.fromstring(mode, size, data) ⇒ image
Creates an image memory from pixel data in a string, using the standard “raw” decoder.
Image.fromstring(mode, size, data, decoder, parameters) ⇒ image
Same, but allows you to use any pixel decoder supported by PIL. For more information on available decoders, see the section Writing Your Own File Decoder.
Note that this function decodes pixel data only, not entire images. If you have an entire image file in a string, wrap it in a StringIO object, and use open to load it.

merge #

Image.merge(mode, bands) ⇒ image
Creates a new image from a number of single band images. The bands are given as a tuple or list of images, one for each band described by the mode. All bands must have the same size.
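A typical use is reassembling bands obtained from split, sketched here:

from PIL import Image

im = Image.open("bride.jpg").convert("RGB")
r, g, b = im.split()
swapped = Image.merge("RGB", (b, g, r))   # swap the red and blue channels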

Methods #

An instance of the Image class has the following methods. Unless otherwise stated, all methods return a new instance of the Image class, holding the resulting image.

convert #

im.convert(mode) ⇒ image
Converts an image to another mode, and returns the new image.
When converting from a palette image, this translates pixels through the palette. If mode is omitted, a mode is chosen so that all information in the image and the palette can be represented without a palette.
When converting from a colour image to black and white, the library uses the ITU-R 601-2 luma transform:
    L = R * 299/1000 + G * 587/1000 + B * 114/1000
When converting to a bilevel image (mode “1”), the source image is first converted to black and white. Resulting values larger than 127 are then set to white, and the image is dithered. To use other thresholds, use the point method. To disable dithering, use the dither= option (see below).
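For example, a custom threshold can be applied with point before converting to mode “1” (the cutoff of 100 is arbitrary):

from PIL import Image

im = Image.open("bride.jpg").convert("L")
bw = im.point(lambda p: 255 if p > 100 else 0).convert("1")
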
im.convert(“P”, **options) ⇒ image
Same, but provides better control when converting an “RGB” image to an 8-bit palette image. Available options are:
dither=. Controls dithering. The default is FLOYDSTEINBERG, which distributes errors to neighboring pixels. To disable dithering, use NONE.
palette=. Controls palette generation. The default is WEB, which is the standard 216-color “web palette”. To use an optimized palette, use ADAPTIVE.
colors=. Controls the number of colors used for the palette when palette is ADAPTIVE. Defaults to the maximum value, 256 colors.
im.convert(mode, matrix) ⇒ image
Converts an “RGB” image to “L” or “RGB” using a conversion matrix. The matrix is a 4- or 16-tuple.
The following example converts an RGB image (linearly calibrated according to ITU-R 709, using the D65 luminant) to the CIE XYZ colour space:
Convert RGB to XYZ
rgb2xyz = (
    0.412453, 0.357580, 0.180423, 0,
    0.212671, 0.715160, 0.072169, 0,
    0.019334, 0.119193, 0.950227, 0 )
out = im.convert("RGB", rgb2xyz)

copy #

im.copy() ⇒ image
Copies the image. Use this method if you wish to paste things into an image, but still retain the original.

crop #

im.crop(box) ⇒ image
Returns a copy of a rectangular region from the current image. The box is a 4-tuple defining the left, upper, right, and lower pixel coordinate.
This is a lazy operation. Changes to the source image may or may not be reflected in the cropped image. To get a separate copy, call the load method on the cropped copy.
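A short sketch (the box coordinates are arbitrary):

from PIL import Image

im = Image.open("bride.jpg")
box = (100, 100, 300, 300)     # left, upper, right, lower
region = im.crop(box)
region.load()                  # force a real copy, detached from the source image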

draft #

im.draft(mode, size)
Configures the image file loader so it returns a version of the image that as closely as possible matches the given mode and size. For example, you can use this method to convert a colour JPEG to greyscale while loading it, or to extract a 128x192 version from a PCD file.
Note that this method modifies the Image object in place (to be precise, it reconfigures the file reader). If the image has already been loaded, this method has no effect.

filter #

im.filter(filter) ⇒ image
Returns a copy of an image filtered by the given filter. For a list of available filters, see the ImageFilter module.
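For example, using two of the predefined filters:

from PIL import Image, ImageFilter

im = Image.open("bride.jpg")
blurred = im.filter(ImageFilter.BLUR)
edges = im.filter(ImageFilter.FIND_EDGES)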

fromstring #

im.fromstring(data)
im.fromstring(data, decoder, parameters)
Same as the fromstring function, but loads data into the current image.

getbands #

im.getbands() ⇒ tuple of strings
Returns a tuple containing the name of each band. For example, getbands on an RGB image returns (“R”, “G”, “B”).

getbbox #

im.getbbox() ⇒ 4-tuple or None
Calculates the bounding box of the non-zero regions in the image. The bounding box is returned as a 4-tuple defining the left, upper, right, and lower pixel coordinate. If the image is completely empty, this method returns None.
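A common use is trimming away an empty (black) border, sketched here:

from PIL import Image

im = Image.open("bride.jpg")
box = im.getbbox()        # None if the image is completely empty
if box:
    im = im.crop(box)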

getcolors #

im.getcolors() ⇒ a list of (count, color) tuples or None
im.getcolors(maxcolors) ⇒ a list of (count, color) tuples or None
(New in 1.1.5) Returns an unsorted list of (count, color) tuples, where count is the number of times the corresponding color occurs in the image.
If the maxcolors value is exceeded, the method stops counting and returns None. The default maxcolors value is 256. To make sure you get all colors in an image, you can pass in size[0]*size[1] (but make sure you have lots of memory before you do that on huge images).

getdata #

im.getdata() ⇒ sequence
Returns the contents of an image as a sequence object containing pixel values. The sequence object is flattened, so that values for line one follow directly after the values of line zero, and so on.
Note that the sequence object returned by this method is an internal PIL data type, which only supports certain sequence operations, including iteration and basic sequence access. To convert it to an ordinary sequence (e.g. for printing), use list(im.getdata()).

getextrema #

im.getextrema() ⇒ 2-tuple
Returns a 2-tuple containing the minimum and maximum values of the image. In the current version of PIL, this only works for single-band images.

getpixel #

im.getpixel(xy) ⇒ value or tuple
Returns the pixel at the given position. If the image is a multi-layer image, this method returns a tuple.
Note that this method is rather slow; if you need to process larger parts of an image from Python, you can either use pixel access objects (see load), or the getdata method.

histogram #

im.histogram() ⇒ list
Returns a histogram for the image. The histogram is returned as a list of pixel counts, one for each pixel value in the source image. If the image has more than one band, the histograms for all bands are concatenated (for example, the histogram for an “RGB” image contains 768 values).
A bilevel image (mode “1”) is treated as a greyscale (“L”) image by this method.
im.histogram(mask) ⇒ list
Returns a histogram for those parts of the image where the mask image is non-zero. The mask image must have the same size as the image, and be either a bi-level image (mode “1”) or a greyscale image (“L”).

load #

im.load()
Allocates storage for the image and loads it from the file (or from the source, for lazy operations). In normal cases, you don’t need to call this method, since the Image class automatically loads an opened image when it is accessed for the first time.
(New in 1.1.6) In 1.1.6 and later, load returns a pixel access object that can be used to read and modify pixels. The access object behaves like a 2-dimensional array, so you can do:
pix = im.load()
print pix[x, y]
pix[x, y] = value
Access via this object is a lot faster than getpixel and putpixel.

offset #

im.offset(xoffset, yoffset) ⇒ image
(Deprecated) Returns a copy of the image where the data has been offset by the given distances. Data wraps around the edges. If yoffset is omitted, it is assumed to be equal to xoffset.
This method is deprecated, and has been removed in PIL 1.2. New code should use the offset function in the ImageChops module.

paste #

im.paste(image, box)
Pastes another image into this image. The box argument is either a 2-tuple giving the upper left corner, a 4-tuple defining the left, upper, right, and lower pixel coordinate, or None (same as (0, 0)). If a 4-tuple is given, the size of the pasted image must match the size of the region.
If the modes don’t match, the pasted image is converted to the mode of this image (see the convert method for details).
im.paste(colour, box)
Same as above, but fills the region with a single colour. The colour is given as a single numerical value for single-band images, and a tuple for multi-band images.
im.paste(image, box, mask)
Same as above, but updates only the regions indicated by the mask. You can use either “1”, “L” or “RGBA” images (in the latter case, the alpha band is used as mask). Where the mask is 255, the given image is copied as is. Where the mask is 0, the current value is preserved. Intermediate values can be used for transparency effects.
Note that if you paste an “RGBA” image, the alpha band is ignored. You can work around this by using the same image as both source image and mask.
im.paste(colour, box, mask)
Same as above, but fills the region indicated by the mask with a single colour.
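A sketch of the workaround mentioned above, pasting a hypothetical transparent overlay onto an image using its own alpha band as the mask:

from PIL import Image

im = Image.open("bride.jpg")
overlay = Image.open("logo.png").convert("RGBA")   # hypothetical overlay with transparency
im.paste(overlay, (0, 0), overlay)                 # the overlay's alpha band acts as the mask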

point #

im.point(table) ⇒ image
im.point(function) ⇒ image
Returns a copy of the image where each pixel has been mapped through the given lookup table. The table should contain 256 values per band in the image. If a function is used instead, it should take a single argument. The function is called once for each possible pixel value, and the resulting table is applied to all bands of the image.
If the image has mode “I” (integer) or “F” (floating point), you must use a function, and the function must have the following format:
    argument * scale + offset
Example:
    out = im.point(lambda i: i * 1.2 + 10)
You can leave out either the scale or the offset.
im.point(table, mode) ⇒ image
im.point(function, mode) ⇒ image
Same as above, but specifies a new mode for the output image. This can be used to convert “L” and “P” images to “1” in one step, e.g. to threshold an image.
(New in 1.1.5) This form can also be used to convert “L” images to “I” or “F”, and to convert “I” images with 16-bit data to “L”. In the second case, you must use a 65536-item lookup table.

putalpha #

im.putalpha(band)
Copies the given band to the alpha layer of the current image.
The image must be an “RGBA” image, and the band must be either “L” or “1”.
(New in PIL 1.1.5) You can use putalpha on other modes as well; the image is converted in place, to a mode that matches the current mode but has an alpha layer (this usually means “LA” or “RGBA”). Also, the band argument can be either an image, or a colour value (an integer).

putdata #

im.putdata(data)
im.putdata(data, scale, offset)
Copy pixel values from a sequence object into the image, starting at the upper left corner (0, 0). The scale and offset values are used to adjust the sequence values:
    pixel = value * scale + offset
If the scale is omitted, it defaults to 1.0. If the offset is omitted, it defaults to 0.0.
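A tiny sketch that fills a new 16x16 greyscale image with a ramp:

from PIL import Image

im = Image.new("L", (16, 16))
im.putdata(range(256))        # 256 values, one per pixel, row by row from the upper left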

putpalette #

im.putpalette(sequence)
Attach a palette to a “P” or “L” image. For an “L” image, the mode is changed to “P”. The palette sequence should contain 768 integer values, where each group of three values represent the red, green, and blue values for the corresponding pixel index. Instead of an integer sequence, you can use a 768-byte string.

putpixel #

im.putpixel(xy, colour)
Modifies the pixel at the given position. The colour is given as a single numerical value for single-band images, and a tuple for multi-band images.
Note that this method is relatively slow. If you’re using 1.1.6, pixel access objects (see load) provide a faster way to modify the image. If you want to generate an entire image, it can be more efficient to create a Python list and use putdata to copy it to the image. For more extensive changes, use paste or the ImageDraw module instead.
You can speed putpixel up a bit by “inlining” the call to the internal putpixel implementation method:
    im.load()
    putpixel = im.im.putpixel
    for i in range(n):
       ...
       putpixel((x, y), value)
In 1.1.6, the above is better written as:
    pix = im.load()
    for i in range(n):
        ...
        pix[x, y] = value

quantize #

im.quantize(colors, **options) ⇒ image
(Deprecated) Converts an “L” or “RGB” image to a “P” image with the given number of colors, and returns the new image.
For new code, use convert with an adaptive palette instead:
out = im.convert("P", palette=Image.ADAPTIVE, colors=256)

resize #

im.resize(size) ⇒ image
im.resize(size, filter) ⇒ image
Returns a resized copy of an image. The size argument gives the requested size in pixels, as a 2-tuple: (width, height).
The filter argument can be one of NEAREST (use nearest neighbour), BILINEAR (linear interpolation in a 2x2 environment), BICUBIC (cubic spline interpolation in a 4x4 environment), or ANTIALIAS (a high-quality downsampling filter). If omitted, or if the image has mode “1” or “P”, it is set to NEAREST.
Note that the bilinear and bicubic filters in the current version of PIL are not well-suited for large downsampling ratios (e.g. when creating thumbnails). You should use ANTIALIAS unless speed is much more important than quality.
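For example, a high-quality downscale:

from PIL import Image

im = Image.open("bride.jpg")
small = im.resize((128, 128), Image.ANTIALIAS)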

rotate #

im.rotate(angle) ⇒ image
im.rotate(angle, filter=NEAREST, expand=0) ⇒ image
Returns a copy of an image rotated the given number of degrees counter clockwise around its centre.
The filter argument can be one of NEAREST (use nearest neighbour), BILINEAR (linear interpolation in a 2x2 environment), or BICUBIC (cubic spline interpolation in a 4x4 environment). If omitted, or if the image has mode “1” or “P”, it is set to NEAREST.
The expand argument, if true, indicates that the output image should be made large enough to hold the rotated image. If omitted or false, the output image has the same size as the input image.

save #

im.save(outfile, options…)
im.save(outfile, format, options…)
Saves the image under the given filename. If format is omitted, the format is determined from the filename extension, if possible. This method returns None.
Keyword options can be used to provide additional instructions to the writer. If a writer doesn’t recognise an option, it is silently ignored. The available options are described later in this handbook.
You can use a file object instead of a filename. In this case, you must always specify the format. The file object must implement the seek, tell, and write methods, and be opened in binary mode.
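A sketch of saving to a file object instead of a filename; the format must then be given explicitly:

from PIL import Image
from StringIO import StringIO

im = Image.open("bride.jpg")
buf = StringIO()
im.save(buf, "PNG")
png_data = buf.getvalue()     # the encoded PNG file as a string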
If the save fails, for some reason, the method will raise an exception (usually an IOError exception). If this happens, the method may have created the file, and may have written data to it. It’s up to your application to remove incomplete files, if necessary.

seek #

im.seek(frame)
Seeks to the given frame in a sequence file. If you seek beyond the end of the sequence, the method raises an EOFError exception. When a sequence file is opened, the library automatically seeks to frame 0.
Note that in the current version of the library, most sequence formats only allow you to seek to the next frame.
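The usual pattern for walking through every frame of a hypothetical multi-frame file looks like this:

from PIL import Image

im = Image.open("animation.gif")      # hypothetical multi-frame file
try:
    while True:
        # ... work with the current frame here ...
        im.seek(im.tell() + 1)        # move to the next frame
except EOFError:
    pass                              # ran past the last frame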

show #

im.show()
Displays an image. This method is mainly intended for debugging purposes.
On Unix platforms, this method saves the image to a temporary PPM file, and calls the xv utility.
On Windows, it saves the image to a temporary BMP file, and uses the standard BMP display utility to show it.
This method returns None.

split #

im.split() ⇒ sequence
Returns a tuple of individual image bands from an image. For example, splitting an “RGB” image creates three new images each containing a copy of one of the original bands (red, green, blue).

tell #

im.tell() ⇒ integer
Returns the current frame number.

thumbnail #

im.thumbnail(size)
im.thumbnail(size, filter)
Modifies the image to contain a thumbnail version of itself, no larger than the given size. This method calculates an appropriate thumbnail size to preserve the aspect of the image, calls the draft method to configure the file reader (where applicable), and finally resizes the image.
The filter argument can be one of NEAREST, BILINEAR, BICUBIC, or ANTIALIAS (best quality). If omitted, it defaults to NEAREST.
Note that the bilinear and bicubic filters in the current version of PIL are not well-suited for thumbnail generation. You should use ANTIALIAS unless speed is much more important than quality.
Also note that this function modifies the Image object in place. If you need to use the full resolution image as well, apply this method to a copy of the original image. This method returns None.

tobitmap #

im.tobitmap() ⇒ string
Returns the image converted to an X11 bitmap.

tostring #

im.tostring() ⇒ string
Returns a string containing pixel data, using the standard “raw” encoder.
im.tostring(encoder, parameters) ⇒ string
Returns a string containing pixel data, using the given data encoding.
Note: The tostring method only fetches the raw pixel data. To save the image to a string in a standard file format, pass a StringIO object (or equivalent) to the save method.

transform #

im.transform(size, method, data) ⇒ image
im.transform(size, method, data, filter) ⇒ image
Creates a new image with the given size and the same mode as the original, and copies data to the new image using the given transform.
In the current version of PIL, the method argument can be EXTENT (cut out a rectangular subregion), AFFINE (affine transform), QUAD (map a quadrilateral to a rectangle), MESH (map a number of source quadrilaterals in one operation), or PERSPECTIVE. The various methods are described below.
The filter argument defines how to filter pixels from the source image. In the current version, it can be NEAREST (use nearest neighbour), BILINEAR (linear interpolation in a 2x2 environment), or BICUBIC (cubic spline interpolation in a 4x4 environment). If omitted, or if the image has mode “1” or “P”, it is set to NEAREST.
im.transform(size, EXTENT, data) ⇒ image
im.transform(size, EXTENT, data, filter) ⇒ image
Extracts a subregion from the image.
Data is a 4-tuple (x0, y0, x1, y1) which specifies two points in the input image’s coordinate system. The resulting image will contain data sampled from between these two points, such that (x0, y0) in the input image will end up at (0,0) in the output image, and (x1, y1) at size.
This method can be used to crop, stretch, shrink, or mirror an arbitrary rectangle in the current image. It is slightly slower than crop, but about as fast as a corresponding resize operation.
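For instance, this sketch stretches the left half of an image across the full output size:

from PIL import Image

im = Image.open("bride.jpg")
w, h = im.size
out = im.transform((w, h), Image.EXTENT, (0, 0, w // 2, h), Image.BILINEAR)
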
im.transform(size, AFFINE, data) ⇒ image
im.transform(size, AFFINE, data, filter) ⇒ image
Applies an affine transform to the image, and places the result in a new image with the given size.
Data is a 6-tuple (a, b, c, d, e, f) which contain the first two rows from an affine transform matrix. For each pixel (x, y) in the output image, the new value is taken from a position (a x + b y + c, d x + e y + f) in the input image, rounded to nearest pixel.
This function can be used to scale, translate, rotate, and shear the original image.
im.transform(size, QUAD, data) ⇒ image
im.transform(size, QUAD, data, filter) ⇒ image
Maps a quadrilateral (a region defined by four corners) from the image to a rectangle with the given size.
Data is an 8-tuple (x0, y0, x1, y1, x2, y2, x3, y3) which contains the upper left, lower left, lower right, and upper right corner of the source quadrilateral.
im.transform(size, MESH, data) ⇒ image
im.transform(size, MESH, data, filter) ⇒ image
Similar to QUAD, but data is a list of target rectangles and corresponding source quadrilaterals.
im.transform(size, PERSPECTIVE, data) ⇒ image
im.transform(size, PERSPECTIVE, data, filter) ⇒ image
Applies a perspective transform to the image, and places the result in a new image with the given size.
Data is an 8-tuple (a, b, c, d, e, f, g, h) which contains the coefficients for a perspective transform. For each pixel (x, y) in the output image, the new value is taken from a position (a x + b y + c)/(g x + h y + 1), (d x + e y + f)/(g x + h y + 1) in the input image, rounded to nearest pixel.
This function can be used to change the 2D perspective of the original image.

transpose #

im.transpose(method) ⇒ image
Returns a flipped or rotated copy of an image.
Method can be one of the following: FLIP_LEFT_RIGHT, FLIP_TOP_BOTTOM, ROTATE_90, ROTATE_180, or ROTATE_270.

verify #

im.verify()
Attempts to determine if the file is broken, without actually decoding the image data. If this method finds any problems, it raises suitable exceptions. This method only works on a newly opened image; if the image has already been loaded, the result is undefined. Also, if you need to load the image after using this method, you must reopen the image file.
Note that this method doesn’t catch all possible errors; to catch decoding errors, you may have to load the entire image as well.

Attributes #

Instances of the Image class have the following attributes:

format #

im.format ⇒ string or None
The file format of the source file. For images created by the library itself (via a factory function, or by running a method on an existing image), this attribute is set to None.

mode #

im.mode ⇒ string
Image mode. This is a string specifying the pixel format used by the image. Typical values are “1”, “L”, “RGB”, or “CMYK.” See Concepts for a full list.

size #

im.size ⇒ (width, height)
Image size, in pixels. The size is given as a 2-tuple (width, height).

palette #

im.palette ⇒ palette or None
Colour palette table, if any. If mode is “P”, this should be an instance of the ImagePalette class. Otherwise, it should be set to None.

info #

im.info ⇒ dictionary
A dictionary holding data associated with the image. This dictionary is used by file handlers to pass on various non-image information read from the file. See documentation for the various file handlers for details.
Most methods ignore the dictionary when returning new images; since the keys are not standardized, it’s not possible for a method to know if the operation affects the dictionary. If you need the information later on, keep a reference to the info dictionary returned from the open method.

Read text file in Python

1)

f = open("rockyou.txt", "r")
for line in f:
    print line
f.close()

2)
 
with open("rockyou.txt", 'r') as f:
    for line in f:
        print line

Validate email with regex

    public static Boolean validateEmail(String email) {
        String EMAIL_PATTERN = "^[_A-Za-z0-9-\\+]+(\\.[_A-Za-z0-9-]+)*@"
                + "[A-Za-z0-9-]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$";

        Pattern pattern = Pattern.compile(EMAIL_PATTERN);
        Matcher matcher = pattern.matcher(email);

        return matcher.matches();
    }

Convert InputStream to JSONObject

/**
     * JSON object input stream.
     *
     * @param in
     *            the in
     * @return the JSON object
     * @throws IOException
     *             Signals that an I/O exception has occurred.
     */
    public static JSONObject jsonObjectInputStream(InputStream in)
            throws IOException {
        String line;
        BufferedReader br = new BufferedReader(new InputStreamReader(in,
                Charset.forName("UTF-8")));
        JSONObject json = new JSONObject();
        try {
            // expects one JSON document per line; the last valid line wins
            while ((line = br.readLine()) != null) {
                try {
                    json = new JSONObject(line);
                } catch (JSONException e) {
                    return null;
                }
            }
        } finally {
            br.close();
        }
        return json;
    }

Send sms via twilio api

String login = "ACCOUNT_SID" + ":" + "AUTH_TOKEN";
        String base64login = new String(Base64.encodeBase64(login.getBytes()));
        String phoneNumber = "+14222456789";
        String messageText = "Hello, World!";
        try {
            Response response = Jsoup
                    .connect(
                            "https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}/Messages.json")
                    .header("Authorization", "Basic " + base64login)
                    .timeout(10000).method(Method.POST).data("To", phoneNumber)
                    .data("From", "+14211456789").data("Body", messageText)
                    .execute();
            if (response.statusCode() == 201) {
                System.out.println("send message");
            } else {
                System.out.println("Message Send Failure");
            }
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

Calculate age in Java

public static int getAge(long dateOfBirth) {
        Calendar today = Calendar.getInstance();
        Calendar birthDate = Calendar.getInstance();

        int age = 0;

        birthDate.setTimeInMillis(dateOfBirth);
        if (birthDate.after(today)) {
            return -1;
        }

        age = today.get(Calendar.YEAR) - birthDate.get(Calendar.YEAR);

        // If birth date is greater than todays date (after 2 days adjustment of
        // leap year) then decrement age one year
        if ((birthDate.get(Calendar.DAY_OF_YEAR)
                - today.get(Calendar.DAY_OF_YEAR) > 3)
                || (birthDate.get(Calendar.MONTH) > today.get(Calendar.MONTH))) {
            age--;

            // If birth date and todays date are of same month and birth day of
            // month is greater than todays day of month then decrement age
        } else if ((birthDate.get(Calendar.MONTH) == today.get(Calendar.MONTH))
                && (birthDate.get(Calendar.DAY_OF_MONTH) > today
                        .get(Calendar.DAY_OF_MONTH))) {
            age--;
        }

        return age;
    }

Convert List String to bytes and bytes to List String

    /**
     * Convert list string to bytes array.
     *
     * @param listString
     *            the list string
     * @return the byte[]
     * @throws IOException
     *             Signals that an I/O exception has occurred.
     */
    public static byte[] convertListStringToBytesArray(List<String> listString)
            throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DeflaterOutputStream out = new DeflaterOutputStream(baos);
        DataOutputStream dos = new DataOutputStream(out);
        for (String i : listString) {
            dos.writeUTF(i);
        }
        dos.close();
        out.close();
        baos.close();
        return baos.toByteArray();
    }

    /**
     * Convert bytes array to list string.
     *
     * @param byteArrays
     *            the byte arrays
     * @return the list
     * @throws IOException
     *             Signals that an I/O exception has occurred.
     */
    public static List<String> convertBytesArrayToListString(byte[] byteArrays)
            throws IOException {
        ByteArrayInputStream bais = new ByteArrayInputStream(byteArrays);
        InflaterInputStream iis = new InflaterInputStream(bais);
        DataInputStream dins = new DataInputStream(iis);
        List<String> listRs = new ArrayList<String>();
        while (true) {
            try {
                String value = dins.readUTF();
                listRs.add(value);
            } catch (EOFException e) {
                break;
            }
        }
        dins.close();
        iis.close();
        bais.close();
        return listRs;
    }

Saturday, August 20, 2016

Getting a name for someone to connect back to your server

When doing test automation it is often the case you need to know the name of the current machine in order to prompt another machine to connect to it, particularly if you are running your tests in parallel. This week I was trying to get the server under test to connect back to a WireMock server running on the slave test machine.
The standard response on Stack Overflow is to use the following pattern to get a network address. In my version here, if we can’t resolve the name, we assume we are running on a developer’s laptop on VPN, so all the tests are run on the same machine (hence localhost).
String hostName = "localhost";
try {
    InetAddress addr = InetAddress.getLocalHost();
    String suggestedName = addr.getCanonicalHostName();
    // Rough test for IP address, if IP address assume a local lookup
    // on VPN
    if (!suggestedName.matches("(\\d{1,3}\\.?){4}") && !suggestedName.contains(":")) {
        hostName = suggestedName;
    }
} catch (UnknownHostException ex) {
}

System.out.println(hostName);
The problem is that we have to trust the local machine settings, for example /etc/hostname, which can result in a network name that is not accessible from another machine. To counter this I wrote the following code, which works over the available network interfaces to find a remotely addressable network name that can be used to talk back to this machine. (I could use an IP address, but they are harder to remember, particularly as we move towards IPv6.)
String hostName = stream(wrap(NetworkInterface::getNetworkInterfaces).get())
        // Only allow interfaces that are functioning
        .filter(wrap(NetworkInterface::isUp))
        // Flat map to any bound addresses
        .flatMap(n -> stream(n.getInetAddresses()))
        // Filter out any local addresses
        .filter(ia -> !ia.isAnyLocalAddress() && !ia.isLinkLocalAddress() && !ia.isLoopbackAddress())
        // Map to a name
        .map(InetAddress::getCanonicalHostName)
        // Ignore if we just got an IP back
        .filter(suggestedName -> !suggestedName.matches("(\\d{1,3}\\.?){4}")
                                 && !suggestedName.contains(":"))
        .findFirst()
        // In my case default to localhost
        .orElse("localhost");

System.out.println(hostName);
You might notice there are a few support methods being used in there to tidy up the code; here are the required support methods if you are interested.
@FunctionalInterface
public interface ThrowingPredicate<T, E extends Exception>{

    boolean test(T t) throws E;
}

@FunctionalInterface
public interface ThrowingSupplier<T, E extends Exception>{

    T get() throws E;
}

public static <T, E extends Exception> Predicate<T> wrap(ThrowingPredicate<T, E> th) {
    return t -> {
        try {
            return th.test(t);
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    };
}

public static <T, E extends Exception> Supplier<T> wrap(ThrowingSupplier<T, E> th) {
    return () -> {
        try {
            return th.get();
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    };
}

// http://stackoverflow.com/a/23276455
public static <T> Stream<T> stream(Enumeration<T> e) {
    return StreamSupport.stream(
            Spliterators.spliteratorUnknownSize(
                    new Iterator<T>() {
                public T next() {
                    return e.nextElement();
                }

                public boolean hasNext() {
                    return e.hasMoreElements();
                }
            },
                    Spliterator.ORDERED), false);
}

Reference: Getting a name for someone to connect back to your server from our JCG partner Gerard Davison at Gerard Davison’s blog.

Token Authentication for Java Applications

Building Identity Management, including authentication and authorization? Try Stormpath! Our REST API and robust Java SDK support can eliminate your security risk and can be implemented in minutes. Sign up, and never build auth again!
Update 5/12/2016: Building a Java application? JJWT is a Java library providing end-to-end JWT creation and verification, developed by our very own Les Hazlewood. Forever free and open-source (Apache License, Version 2.0), JJWT is simple to use and understand. It was designed with a builder-focused fluent interface hiding most of its complexity. We’d love to have you try it out, and let us know what you think! (And, if you’re a Node developer, check out NJWT!)
In my last post, we covered a lot of ground, including how we traditionally go about securing websites, some of the pitfalls of using cookies and sessions, and how to address those pitfalls by traditional means.
In this post we’ll go beyond the traditional and take a deep dive into how token authentication with JWTs (JSON Web Tokens) not only addresses these concerns, but also gives us the benefit of inspectable meta-data and strong cryptographic signatures.

Token Authentication to the Rescue!

Let’s first examine what we mean by authentication and token in this context.
Authentication is proving that a user is who they say they are.
A token is a self-contained singular chunk of information. It could have intrinsic value or not. We are going to look at a particular type of token that does have intrinsic value and addresses a number of the concerns with session IDs.

JSON Web Tokens (JWTs)

JWTs are a URL-safe, compact, self-contained string with meaningful information that is usually digitally signed or encrypted. They’re quickly becoming a de-facto standard for token implementations across the web.
URL-safe is a fancy way of saying that the entire string is encoded so there are no special characters and the token can fit in a URL.
The string is opaque and can be used standalone in much the same way that session IDs are used. By opaque, I mean that looking at the string itself provides no additional information.
However, the string can also be decoded to pull out meta-data, and its signature can be cryptographically verified so that your application knows that the token has not been tampered with.

JWTs and OAuth2 Access Tokens

Many OAuth2 implementations are using JWTs for their access tokens. It should be stated that the OAuth2 and JWT specifications are completely separate from each other and don’t have any dependencies on each other. Using JWTs as the token mechanism for OAuth2 affords a lot of benefit as we’ll see below.
JWTs can be stored in cookies, but all the rules for cookies we discussed before still apply. You can entirely replace your session id with a JWT. You can then gain the additional benefit of accessing the meta-information directly from that session id.
In the wild, they look like just another ugly string:
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwOi8vdHJ1c3R5YXBwLmNvbS8iLCJleHAiOjEzMDA4MTkzODAsInN1YiI6InVzZXJzLzg5ODM0NjIiLCJzY29wZSI6InNlbGYgYXBpL2J1eSJ9.43DXvhrwMGeLLlP4P4izjgsBB2yrpo82oiUPhADakLs
 If you look carefully, you can see that there are two periods in the string. These are significant as they delimit different sections of the JWT.
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
.
eyJpc3MiOiJodHRwOi8vdHJ1c3R5YXBwLmNvbS8iLCJleHAiOjEzMDA4MTkzODAsInN1YiI6InVzZXJzLzg5ODM0NjIiLCJzY29wZSI6InNlbGYgYXBpL2J1eSJ9
.
43DXvhrwMGeLLlP4P4izjgsBB2yrpo82oiUPhADakLs

JWT Structure

JWTs have a three-part structure, each of which is base64-encoded.
Here are the parts decoded:
Header
{
  "typ": "JWT",
  "alg": "HS256"
}
 Claims
{
  "iss":"http://trustyapp.com/",
  "exp": 1300819380,
  "sub": "users/8983462",
  "scope": "self api/buy"
}
 Cryptographic Signature
(the signature is raw binary data and does not decode to readable text)

JWT Claims

Let’s examine the claims sections. Each type of claim that is part of the JWT Specification can be found here.
iss is who issued the token.
exp is when the token expires.
sub is the subject of the token. This is usually a user identifier of some sort.
The above parts of the claim are all included in the JWT specification. scope is not included in the specification, but it is commonly used to provide authorization information. That is, what parts of the application the user has access to.
One advantage of JWTs is that arbitrary data can be encoded into the claims as with scope above. Another advantage is that the client can now react to this information without any further interaction with the server. For instance, a portion of the page may be hidden based on the data found in the scope claim.
NOTE: It is still critical and a best practice for the server to always verify actions taken by the client. If, for instance, some administrative action was being taken on the client, you would still want to verify on the application server that the current user had permission to perform that action. You would never rely on client side authorization information alone.
You may have picked up on another advantage: the cryptographic signature. The signature can be verified which proves that the JWT has not been tampered with. Note that the presence of a cryptographic signature does not guarantee confidentiality. Confidentiality is ensured only when the JWT is encrypted as well as signed.
Now, for the big kicker: statelessness. While the server will need to generate the JWT, it does not need to store it anywhere as all of the user meta-data is encoded right into the JWT. The server and client could pass the JWT back and forth and never store it. This scales very well.

Managing Bearer Token Security

Implicit trust is a tradeoff. These types of tokens are often referred to as Bearer Tokens because all that is required to gain access to the protected sections of an application is the presentation of a valid, unexpired token.
You have to address issues like: How long should the token be good for? How will you revoke it? (There’s a whole other post we could do on refresh tokens.)
You have to be mindful of what you store in the JWT if it is not encrypted. Do not store any sensitive information. It is generally accepted practice to store a user identifier in the form of the sub claim. When a JWT is signed, it’s referred to as a JWS. When it’s encrypted, it’s referred to as a JWE.

Java, JWT and You!

We are very proud of the JJWT project on Github. Primarily authored by Stormpath’s own CTO, Les Hazlewood, it’s a fully open-source JWT solution for Java. It’s the easiest to use and understand library for creating and verifying JSON Web Tokens on the JVM.
How do you create a JWT? Easy peasy!
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
 
byte[] key = getSignatureKey();
 
String jwt = 
    Jwts.builder().setIssuer("http://trustyapp.com/")
        .setSubject("users/1300819380")
        .setExpiration(expirationDate)
        .put("scope", "self api/buy") 
        .signWith(SignatureAlgorithm.HS256,key)
        .compact();

The first thing to notice is the fluent builder api used to create a JWT. Method calls are chained culminating in the compact call which returns the final JWT string.
Also notice that when we are setting one of the claims from the specification, we use a setter. For example: .setSubject("users/1300819380"). When a custom claim is set, we use a call to claim and specify both the key and value. For example: .claim("scope", "self api/buy")
It’s just as easy to verify a JWT.
String subject = "HACKER";
try {
    Jws<Claims> jwtClaims =
        Jwts.parser().setSigningKey(key).parseClaimsJws(jwt);
 
    subject = jwtClaims.getBody().getSubject();
 
    // OK, we can trust this JWT
 
} catch (SignatureException e) {
 
    // don't trust the JWT!
}

If the JWT has been tampered with in any way, parsing the claims will throw a SignatureException and the value of the subject variable will stay HACKER. If it’s a valid JWT, then subject will be extracted from it: jwtClaims.getBody().getSubject()

What is OAuth?

In the next section, we’ll look at an example using Stormpath’s OAuth2 implementation, which makes use of JWTs.
There’s a lot of confusion around the OAuth2 spec. That’s, in part, because it is really an über spec: it has a lot of complexity. It’s also because OAuth 1.0a and OAuth2 are very different beasts. We are going to look at a very specific, easy to use, subset of the OAuth2 spec. We have an excellent post that goes into much more detail on What the Heck is OAuth. Here, we’ll give some brief background and then jump right into the examples.
OAuth2 is basically a protocol that supports authorization workflows. What this means is that it gives you a way to ensure that a specific user has permissions to do something.
That’s it.
OAuth2 isn’t meant to do stuff like validate a user’s identity — that’s taken care of by an Authentication service. Authentication is when you validate a user’s identity (like asking for a username / password to log in), whereas authorization is when you check to see what permissions an existing user already has.
Just remember that OAuth2 is a protocol for authorization.

Using OAuth Grant Types for Authorization

Let’s look at a typical OAuth2 interaction.
POST /oauth/token HTTP/1.1
Origin: https://foo.com
Content-Type: application/x-www-form-urlencoded
 
grant_type=password&amp;username=username&amp;password=password
 grant_type is required. The application/x-www-form-urlencoded content type is required for this type of interaction as well. Given that you are passing the username and password over the wire, you would always want the connection to be secure. The good thing, however, is that the response will have an OAuth2 bearer token. This token will then be used for every interaction between the browser and server going forward. There is a very brief exposure here where the username and password are passed over the wire. Assuming the authentication service on the server verifies the username and password, here’s the response:
HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache
 
{
    "access_token":"2YotnFZFEjr1zCsicMWpAA...",
    "token_type":"example",
    "expires_in":3600,
    "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA...",
    "example_parameter":"example_value"
}
Notice the Cache-Control and Pragma headers. We don’t want this response being cached anywhere. The access_token is what will be used by the browser in subsequent requests. Again, there is no direct relationship between OAuth2 and JWT. However, the access_token can be a JWT. That’s where the extra benefit of the encoded meta-data comes in. Here’s how the access token is leveraged in future requests:
GET /admin HTTP/1.1
Authorization: Bearer 2YotnFZFEjr1zCsicMW...
The Authorization header is a standard header. No custom headers are required to use OAuth2. Rather than the type being Basic, in this case the type is Bearer. The access token is included directly after the Bearer keyword. This completes the OAuth2 interaction for the password grant type. Every subsequent request from the browser can use the Authorization: Bearer header with the access token.
There’s another grant type known as client_credentials, which uses client_id and client_secret rather than username and password. This grant type is typically used for API interactions. While the client id and client secret function similarly to a username and password, they are usually of higher security quality and not necessarily human readable.

Take Us Home: OAuth2 Java Example

We’ve arrived! It’s time to dig into some specific code that demonstrates JWTs in action.

Spring Boot Web MVC

There are a number of examples in the Stormpath Java SDK. Here, we are going to look at a Spring Boot Web MVC example. Here’s the HelloController from the example:
@RestController
public class HelloController {
 
    @RequestMapping("/")
    String home(HttpServletRequest request) {
 
        String name = "World";
 
        Account account = AccountResolver.INSTANCE.getAccount(request);
        if (account != null) {
            name = account.getGivenName();
        }
 
        return "Hello " + name + "!";
    }
 
}

The key line, for the purposes of this demonstration is:
Account account = AccountResolver.INSTANCE.getAccount(request);
Behind the scenes, account will resolve to an Account object (and not be null) ONLY if an authenticated session is present.

Build and Run the Example Code

To build and run this example, do the following:
☺ dogeared jobs:0 ~/Projects/StormPath/stormpath-sdk-java (master|8100m)
➥ cd examples/spring-boot-webmvc/
☺ dogeared jobs:0 ~/Projects/StormPath/stormpath-sdk-java/examples/spring-boot-webmvc (master|8100m)
➥ mvn clean package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Stormpath Java SDK :: Examples :: Spring Boot Webapp 1.0.RC4.6-SNAPSHOT
[INFO] ------------------------------------------------------------------------
 
... skipped output ...
 
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.865 s
[INFO] Finished at: 2015-08-04T11:46:05-04:00
[INFO] Final Memory: 31M/224M
[INFO] ------------------------------------------------------------------------
☺ dogeared jobs:0 ~/Projects/StormPath/stormpath-sdk-java/examples/spring-boot-webmvc (master|8100m

Launch the Spring Boot Example

You can then launch the Spring Boot example like so:
☺ dogeared jobs:0 ~/Projects/StormPath/stormpath-sdk-java/examples/spring-boot-webmvc (master|8104m)
➥ java -jar target/stormpath-sdk-examples-spring-boot-web-1.0.RC4.6-SNAPSHOT.jar
 
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.2.1.RELEASE)
 
2015-08-04 11:51:00.127  INFO 17973 --- [           main] tutorial.Application                     : Starting Application v1.0.RC4.6-SNAPSHOT on MacBook-Pro.local with PID 17973 
 
... skipped output ...
 
2015-08-04 11:51:04.558  INFO 17973 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2015-08-04 11:51:04.559  INFO 17973 --- [           main] tutorial.Application                     : Started Application in 4.599 seconds (JVM running for 5.103)

NOTE: This assumes that you’ve already set up a Stormpath account and that your api keys are located in ~/.stormpath/apiKey.properties. Look here for more information on quick setup of Stormpath with Spring Boot.

Authenticate with a JSON Web Token (or Not)

Now, we can exercise the example and show some JWTs in action! First, hit your endpoint without any authentication. I like to use httpie, but any command line http client will do.
➥ http -v localhost:8080
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:8080
User-Agent: HTTPie/0.9.2
 
 
HTTP/1.1 200 OK
Accept-Charset: big5, big5-hkscs, cesu-8, euc-jp, euc-kr, gb18030, ... 
Content-Length: 12
Content-Type: text/plain;charset=UTF-8
Date: Tue, 04 Aug 2015 15:56:41 GMT
Server: Apache-Coyote/1.1
 
Hello World!

The -v parameter produces verbose output and shows all the headers for the request and the response. In this case, the output message is simply: Hello World!. This is because there is no established session.

Authenticate with the Stormpath OAuth Endpoint

Now, let’s hit the oauth endpoint so that our server can authenticate with Stormpath. You may ask, “What oauth endpoint?” The controller above doesn’t indicate any such endpoint. Are there other controllers with other endpoints in the example? No, there are not! Stormpath gives you oauth (and many other) endpoints right out-of-the-box. Check it out:
➥ http -v --form POST http://localhost:8080/oauth/token  \
&gt; 'Origin:http://localhost:8080' \
&gt; grant_type=password username=micah+demo.jsmith@stormpath.com password=
POST /oauth/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded; charset=utf-8
Host: localhost:8080
Origin: http://localhost:8080
User-Agent: HTTPie/0.9.2
 
grant_type=password&amp;username=micah%2Bdemo.jsmith%40stormpath.com&amp;password=
 
HTTP/1.1 200 OK
Cache-Control: no-store
Content-Length: 325
Content-Type: application/json;charset=UTF-8
Date: Tue, 04 Aug 2015 16:02:08 GMT
Pragma: no-cache
Server: Apache-Coyote/1.1
Set-Cookie: account=eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxNDQyNmQxMy1mNThiLTRhNDEtYmVkZS0wYjM0M2ZjZDFhYzAiLCJpYXQiOjE0Mzg3MDQxMjgsInN1YiI6Imh0dHBzOi8vYXBpLnN0b3JtcGF0aC5jb20vdjEvYWNjb3VudHMvNW9NNFdJM1A0eEl3cDRXaURiUmo4MCIsImV4cCI6MTQzODk2MzMyOH0.wcXrS5yGtUoewAKqoqL5JhIQ109s1FMNopL_50HR_t4; Expires=Wed, 05-Aug-2015 16:02:08 GMT; Path=/; HttpOnly
 
{
    "access_token": "eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxNDQyNmQxMy1mNThiLTRhNDEtYmVkZS0wYjM0M2ZjZDFhYzAiLCJpYXQiOjE0Mzg3MDQxMjgsInN1YiI6Imh0dHBzOi8vYXBpLnN0b3JtcGF0aC5jb20vdjEvYWNjb3VudHMvNW9NNFdJM1A0eEl3cDRXaURiUmo4MCIsImV4cCI6MTQzODk2MzMyOH0.wcXrS5yGtUoewAKqoqL5JhIQ109s1FMNopL_50HR_t4",
    "expires_in": 259200,
    "token_type": "Bearer"
}

There’s a lot going on here, so let’s break it down.
On the first line, I am telling httpie that I want to make a form url-encoded POST; that’s what the --form and POST parameters do. I am hitting the /oauth/token endpoint of my locally running server. I specify an Origin header. This is required to interact with Stormpath for the security reasons we talked about previously. As per the OAuth2 spec, I am passing up grant_type=password along with a username and password.
The response has a Set-Cookie header as well as a JSON body containing the OAuth2 access token. And guess what? That access token is also a JWT. Here are the claims decoded:
{
  "jti": "14426d13-f58b-4a41-bede-0b343fcd1ac0",
  "iat": 1438704128,
  "sub": "https://api.stormpath.com/v1/accounts/5oM4WI3P4xIwp4WiDbRj80",
  "exp": 1438963328
}
 Notice the sub key. That’s the full Stormpath URL to the account I authenticated as. Now, let’s hit our basic Hello World endpoint again, only this time, we will use the OAuth2 access token:
➥ http -v localhost:8080 \
&gt; 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxNDQyNmQxMy1mNThiLTRhNDEtYmVkZS0wYjM0M2ZjZDFhYzAiLCJpYXQiOjE0Mzg3MDQxMjgsInN1YiI6Imh0dHBzOi8vYXBpLnN0b3JtcGF0aC5jb20vdjEvYWNjb3VudHMvNW9NNFdJM1A0eEl3cDRXaURiUmo4MCIsImV4cCI6MTQzODk2MzMyOH0.wcXrS5yGtUoewAKqoqL5JhIQ109s1FMNopL_50HR_t4'
GET / HTTP/1.1
Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxNDQyNmQxMy1mNThiLTRhNDEtYmVkZS0wYjM0M2ZjZDFhYzAiLCJpYXQiOjE0Mzg3MDQxMjgsInN1YiI6Imh0dHBzOi8vYXBpLnN0b3JtcGF0aC5jb20vdjEvYWNjb3VudHMvNW9NNFdJM1A0eEl3cDRXaURiUmo4MCIsImV4cCI6MTQzODk2MzMyOH0.wcXrS5yGtUoewAKqoqL5JhIQ109s1FMNopL_50HR_t4
Connection: keep-alive
Host: localhost:8080
User-Agent: HTTPie/0.9.2
 
 
 
HTTP/1.1 200 OK
Content-Length: 11
Content-Type: text/plain;charset=UTF-8
Date: Tue, 04 Aug 2015 16:44:28 GMT
Server: Apache-Coyote/1.1
 
Hello John!
Notice on the last line of the output that the message addresses us by name. Now that we’ve established an authenticated session with Stormpath using OAuth2, these lines in the controller retrieve the first name:
Account account = AccountResolver.INSTANCE.getAccount(request);
if (account != null) {
    name = account.getGivenName();
}

Summary: Token Authentication for Java Apps

In this post, we’ve looked at how token authentication with JWTs not only addresses the concerns of traditional approaches, but also gives us the benefit of inspectable meta-data and strong cryptographic signatures.
We gave an overview of the OAuth2 protocol and went through a detailed example of how Stormpath’s implementation of OAuth2 uses JWTs.

The 12 Step Program to Realizing Your Java Monitoring is Flawed

What are some of the biggest problems with the current state of Java monitoring?
Errors in production are much like drunk texting. You only realize something went wrong after it had already happened. Texting logs are usually more amusing than application error logs, but… both can be equally hard to fix.
In this post we’ll go through a 12-step monitoring-flaws rehab program: a thought experiment backed by the experience of Takipi’s users, covering some of the most common problems that you’re likely to encounter and what you can do about them.
Let’s roll.

Step #1: Admitting that we have a problem

In fact, at a higher level it’s only one problem: application reliability. You need to be able to quickly know when there’s something wrong with the application, and to have quick access to all the information you need in order to fix it.
When we take a step closer, the reliability problem is made up of many other symptoms with the current state of monitoring and logging. These are thorny issues that most people try to bury or avoid altogether. But in this post, we’re putting them in the spotlight.
Bottom line: Troubleshooting and handling new errors that show up in production is unavoidable.

Step #2: Shutting down monitoring information overload

It's good practice to collect everything you can about your application, but that's only useful when the metrics are meaningful. Inconsistent logging and metrics telemetry generate more noise than signal when actionability is an afterthought, even if they produce beautiful dashboards.
A big part of this is misusing exceptions and logged errors as part of the application’s control flow, clogging up logs with the paradox of “normal” exceptions. You can read more about this in the recent eBook we released right here.
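As a purely hypothetical illustration of that anti-pattern (not code from the eBook or from any real project), here is a lookup that treats a routine cache miss as an exception and logs it as an error:
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Logger;

public class ExceptionControlFlowDemo {

    // A "normal" event modelled as an exception.
    static class CacheMissException extends RuntimeException {}

    private static final Logger LOG = Logger.getLogger(ExceptionControlFlowDemo.class.getName());
    private static final Map<String, String> CACHE = new HashMap<>();

    static String getUser(String id) {
        try {
            String user = CACHE.get(id);
            if (user == null) {
                throw new CacheMissException();           // routine cache miss...
            }
            return user;
        } catch (CacheMissException e) {
            LOG.severe("user " + id + " not in cache");   // ...logged as if it were an error
            return "loaded-from-db:" + id;                // stand-in for a database lookup
        }
    }

    public static void main(String[] args) {
        // Every call on a cold cache produces a scary SEVERE log line for routine behaviour.
        System.out.println(getUser("42"));
    }
}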
As the cost of monitoring and data retention goes down, the problem shifts to collecting actionable data and making sense of it.
Bottom line: Even though it's gradually getting easier to log and report on everything, error root cause discovery is still mostly manual; the haystack gets bigger and the needle is harder to find.

Step #3: Avoiding tedious log analysis

Let's assume we have some error, a specific transaction that fails some of the time. We now have to find all the relevant information about it in our log files. Time to grep our way through the logs, or play around with different queries in log management tools that make the search quicker, like Splunk or ELK.
To make this process easier, developers who use Takipi can extend each logged error, warning, and exception with the source code and variable state that caused it. Each log line gets a link appended to it that leads to the event's analysis in Takipi.
Bottom line: Manually sifting through logs is a tedious process that can be avoided.

Step #4: Realizing that production log levels aren’t verbose enough

Log levels are a double-edged sword. The more levels you log in production, the more context you have. But the extra logging creates overhead that is best avoided in production. Sometimes the additional data you need would live in a "DEBUG" or an "INFO" message, but production applications usually only write "WARN"-level messages and above.
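To make the trade-off concrete, here is a minimal, hypothetical sketch using the JDK's built-in java.util.logging (the post doesn't prescribe any particular logging framework): with the logger set to WARNING, the DEBUG- and INFO-level detail you would want while troubleshooting is never recorded in the first place.
import java.util.logging.Level;
import java.util.logging.Logger;

public class ProductionLogLevelDemo {
    private static final Logger LOG = Logger.getLogger(ProductionLogLevelDemo.class.getName());

    public static void main(String[] args) {
        // Typical production setting: keep only warnings and errors.
        LOG.setLevel(Level.WARNING);

        LOG.fine("cart recalculated: 3 items, subtotal=42.00");   // dropped (DEBUG-level detail)
        LOG.info("request /checkout handled in 120 ms");          // dropped (INFO-level detail)
        LOG.warning("payment gateway timed out, retrying");       // the only line that survives
    }
}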
The way we solve this in Takipi is with a recently released feature that lets you see the last 250 log statements within the thread leading up to an error, even if they were never written to the log file in production.
Wait, what? Logless logging with no additional overhead. Since log statements are captured directly in memory, without relying on log files, we're able to provide full verbosity in production without affecting log size or adding overhead. You can read more about it right here, or try it yourself.
Bottom line: As of today you’re not limited to WARN and above levels in production logging.

Step #5: Next person who says “cannot reproduce” buys a round of drinks

Probably the most common excuse for deferring a bug fix is "can't reproduce": an error that lacks the state that caused it. Something bad happens, you usually first hear about it from an actual user, and you can't recreate it or find evidence in the logs or collected metrics.
The hidden meaning of “can’t reproduce” is right at the source. If you’re used to first hearing about errors from users, there might be something wrong with the way errors are tracked internally. With proper monitoring in place, it’s possible to identify and solve errors before actual users report them.
Bottom line: Stop reproducing “cannot reproduce”.

Step #6: Breaking the log statements redeploy cycle

A common, infamous, and unfortunate cure for "cannot reproduce" is adding more logging statements in production and hoping the bug happens again.
In production.
Messing up real users.
That's the production debugging paradox right there. A bug happens, you don't have enough data to solve it (but you do have lots of noise), so you add logging statements, build, test (with the same tests that missed the bug in the first place), deploy to production, hope for it to happen again, hope the new data is enough, or… repeat.
Bottom line: The ultimate goal for a successful monitoring strategy would be to prevent this cycle from happening.

Step #7: APM + Trackers + Metrics + Logs = Limited visibility

Let's step it up a notch. We've covered logs and dashboard reporting metrics; now it's time to add error-tracking tools and APMs to the mix.
The fact is that even when a monitoring stack includes a solution from all four categories, the visibility you get into application errors is limited. You'll see the stack trace of the transaction, or at most a few predefined, hardcoded variables. Traditional monitoring stacks have no visibility into the full state of the application at the moment of error.
Bottom line: There's a critical component missing from today's common monitoring stack: variable-level visibility for production debugging.

Step #8: Preparing for distributed error monitoring

Monitoring doesn't stop at the single-server level, especially with microservice architectures, where an error that originated on one server can cause trouble elsewhere.
While microservices promote the "Separation of Concerns" principle, they also introduce a plethora of new problems at scale across servers. In this previous post we covered these issues and offered possible solution strategies.
Bottom line: Any monitoring solution should take distributed errors into account and be able to stitch together troubleshooting data from multiple sources.

Step #9: Find a way around long troubleshooting cycles

Whether it's an alerting issue or simply a matter of priorities, for most applications the troubleshooting cycle takes days, weeks, or even months after the error was first introduced. The person who reported it might be unreachable or, worse, the relevant data could be long gone or rolled over due to data retention policies.
The ability to freeze a snapshot of the application state at the moment of error, even when it comes from multiple services or sources, is critical here; otherwise the important data can be lost.
Bottom line: Long troubleshooting cycles should be avoided.

Step #10: Acknowledge the dev vs ops dilemma

Keeping up with release-cycle issues, we're all in the same boat, but developers want to release features faster while operations would rather keep the production environment stable.
Short feature cycles and long troubleshooting cycles just don't go together; there has to be a balance between the two. Monitoring is a team sport, and the tools have to know how to talk to each other. For example, with Takipi you can get alerts in Slack, PagerDuty, or HipChat, and open a JIRA ticket directly with all the available error analysis data.
Bottom line: Collaborative workflows speed up issue resolution times.

Step #11: There’s hope

Modern developer tools are taking big steps to improve on the current state of monitoring, whether in the field of logs, application performance management, or the new categories that are in the works.
Bottom line: Keep an eye out for developments in the tooling ecosystem and best practices from other companies.

Step #12: Spread the word

Monitoring is an inseparable part of software development; let's keep the discussion going!
We hope you’ve enjoyed this overview / rant of some of the main problems with the current state of monitoring. Are there any other issues with monitoring that keep you up at night?
Please feel free to share them in the comments section below.

It’s easy to document your Play Framework REST API with Swagger

I have been using Play Framework as a Java-based, lightning-fast REST backend framework for several projects. Later, I was excited to find Swagger and worked to integrate it into a few projects. As I struggled with it the first time, I thought it would be useful to share my experience and create a "how-to" article describing the steps to succeed quickly.
To simplify things, I started off with an existing Play Framework, Java, JPA, REST project created by James Ward. James' project is located on GitHub, so you should pull it before you start with this how-to.

How-To Steps

  1. First, add the dependencies to your build.sbt. I was able to solve a dependency problem between the version of Play Framework (2.3.0) used in the sample project and swagger-core by referring to GitHub issue 550: https://github.com/swagger-api/swagger-core/issues/550.
    "com.wordnik" %% "swagger-play2" % "1.3.12" exclude("org.reflections", "reflections"), 
    "org.reflections" % "reflections" % "0.9.8" notTransitive (), 
    "org.webjars" % "swagger-ui" % "2.1.8-M1"
  2. Add this to your application.conf configuration:
    api.version="1.0"
    swagger.api.basepath="http://localhost:9000"
  3. Add the api-docs routes to the routes file:
    GET /          controllers.Assets.at(path="/public", file="index.html")
    
    GET /api-docs  controllers.ApiHelpController.getResources
    
    POST /login    controllers.SecurityController.login()
    POST /logout   controllers.SecurityController.logout()
    
    GET /api-docs/api/todos  controllers.ApiHelpController.getResource(path = "/api/todos")
    GET /todos     controllers.TodoController.getAllTodos()
    POST /todos    controllers.TodoController.createTodo()
    
    # Map static resources from the /public folder to the /assets URL path
    GET /assets/*file  controllers.Assets.at(path="/public", file)
  4. Add Swagger annotations to the TodoController (@Api); a sketch showing how the Todo model itself could be annotated follows after this list:
    @Api(value = "/api/todos", description = "Operations with Todos")
    @Security.Authenticated(Secured.class)
    public class TodoController extends Controller {
    Then the annotations for the GET and POST methods:
    @ApiOperation(value = "get All Todos",
         notes = "Returns List of all Todos",
         response = Todo.class, 
         httpMethod = "GET") 
    public static Result getAllTodos() { 
         return ok(toJson(models.Todo.findByUser(SecurityController.getUser()))); 
    }
    @ApiOperation( 
         nickname = "createTodo", 
         value = "Create Todo", 
         notes = "Create Todo record", 
         httpMethod = "POST", 
         response = Todo.class
     ) 
    @ApiImplicitParams( 
         { 
              @ApiImplicitParam( 
                   name = "body", 
                   dataType = "Todo", 
                   required = true, 
                   paramType = "body", 
                   value = "Todo" 
              ) 
         } 
    ) 
    @ApiResponses( 
              value = { 
                      @com.wordnik.swagger.annotations.ApiResponse(code = 400, message = "Json Processing Exception") 
              } 
    ) 
    public static Result createTodo() { 
         Form<models.Todo> form = Form.form(models.Todo.class).bindFromRequest(); 
         if (form.hasErrors()) { 
             return badRequest(form.errorsAsJson()); 
         } 
         else { 
              models.Todo todo = form.get(); 
              todo.user = SecurityController.getUser(); 
              todo.save(); 
              return ok(toJson(todo)); 
         } 
    }
  5. Start the application and go to this URL in your browser:

    http://localhost:9000/assets/lib/swagger-ui/index.html?url=http://localhost:9000/api-docs
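
Beyond the controller annotations in step 4, swagger-core can also read annotations on the model classes themselves, which makes the generated request and response schemas more descriptive. The snippet below is only an illustrative sketch: the field names are assumptions about the Todo entity in James Ward's project rather than code copied from it.
import com.wordnik.swagger.annotations.ApiModel;
import com.wordnik.swagger.annotations.ApiModelProperty;

// Illustrative sketch only: field names are assumed, not taken from the original project.
@ApiModel(value = "Todo")
public class Todo {

    @ApiModelProperty(value = "Primary key of the todo item")
    public Long id;

    @ApiModelProperty(value = "The todo text shown to the user", required = true)
    public String value;
}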

Source Code

As mentioned in the beginning, I started with James Ward's play-rest-security on GitHub and made these modifications on my fork. For all who are interested, here is the source code: