Guice, Jetty, Jersey and Shiro

Well, a lot of loosely related words in a title. But this set makes sense. I got quite a headache when I tried to figure out all those components at once. As it finally turned out, it was probably just because of my poor knowledge of the whole Servlet framework. Anyway, I would like to share a few remarks which helped me understand what is going on.

Disclaimer: sorry for the poor quality of this post. It was created primarily as a way to organize all the pieces of information I had and to help me understand the whole system.

So I have an application. Nothing fancy, just a few REST resources. It uses the first three frameworks mentioned in the title. I had a rough idea of what does what, but I didn’t know the demarcation lines, and when I had to plug in the last one, it got a bit messy. Especially since a lot of things in those frameworks are kind of hidden from the developer.

As I mentioned, the application consists of a few REST resources served by Jersey with the use of Guice. But wait, Guice? It’s a dependency injection framework, so what the hell is its content part? The answer is that Guice provides GuiceFilter which, when placed at the top of web.xml (or at the top of any other servlet filter configuration), will serve matching requests with injected content once you install Guice’s ServletModule. That module is an alternative to web.xml: all further filters and servlets can be declared in code. For more information about that, I recommend reading https://github.com/google/guice/wiki/ServletModule
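
To make it concrete, here is a minimal sketch of such a ServletModule. ApiServlet is just a placeholder name; in practice it could be any Guice-managed servlet, for example Jersey’s Guice integration servlet.

import com.google.inject.Singleton;
import com.google.inject.servlet.ServletModule;

public class AppServletModule extends ServletModule {
    @Override
    protected void configureServlets() {
        // Servlets and filters managed by Guice must be singletons.
        bind(ApiServlet.class).in(Singleton.class);
        // Everything under /api/* is handled by the injected servlet,
        // provided GuiceFilter sits in front of the container's servlets.
        serve("/api/*").with(ApiServlet.class);
    }
}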

Okay, enough about Guice, what about Jetty? What is it doing in this set? Jetty is a lightweight, fast Java HTTP server. It can nicely be embedded into an application, which is beneficial for both clarity and complexity. For me, a slightly annoying thing when working with it was the fact that almost all webapp tutorials and issues on the internet are written for the classical, deployable way, which makes it a bit harder to find proper solutions quickly. Well, and here we have an example 🙂

What Jetty does in this setup is launch the server itself and run GuiceFilter. Note that things typically done in a ServletContextListener should be done in the application before the server starts (which is obvious, but easy to overlook).
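
A rough sketch of that wiring with embedded Jetty (assuming a Jetty 9-style API and the placeholder module from above):

import java.util.EnumSet;
import javax.servlet.DispatcherType;
import com.google.inject.Guice;
import com.google.inject.servlet.GuiceFilter;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class Main {
    public static void main(String[] args) throws Exception {
        // Work that would normally live in a ServletContextListener happens here,
        // before the server starts.
        Guice.createInjector(new AppServletModule());

        Server server = new Server(8080);
        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
        // GuiceFilter intercepts every request and dispatches it to whatever
        // the ServletModule declared; DefaultServlet handles the rest.
        context.addFilter(GuiceFilter.class, "/*", EnumSet.of(DispatcherType.REQUEST));
        context.addServlet(DefaultServlet.class, "/");
        server.setHandler(context);

        server.start();
        server.join();
    }
}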

The next part of this puzzle is Jersey. Jersey is a JAX-RS implementation. So, long story short, it provides the REST part of the application: resources, paths, available methods and, finally, authorization. Especially the last part is noteworthy, because authentication is a task of the HTTP server (at least in the last iteration of the application I am writing).
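
Just to picture it, a Jersey resource is nothing more than an annotated class. The path and class name below are made up for illustration:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/status")
public class StatusResource {
    // JAX-RS maps the HTTP method and path to this Java method;
    // Guice supplies any constructor dependencies.
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String status() {
        return "OK";
    }
}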

And the last one is Shiro. It’s a new element in the application, and I want to fit it in so as to allow seamless migration to other protocols. How should it be connected to the other parts of the system? As Shiro also has excellent integration with web containers, there is ShiroWebModule, which you can extend to get your configuration working. But here we have a problem: you cannot install the Shiro module in Guice’s ServletModule. Why? Because when the ServletModule is created directly (not via GuiceServletContextListener, which is the case when using the standard webapp deployment pipeline), it doesn’t have access to the ServletContext. It has a getServletContext() method, but it always returns null. In hindsight everything makes sense, but it was quite confusing for a while.
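
For reference, such an extension looks roughly like this sketch. MyRealm is a placeholder realm, and the filter chain constants come from ShiroWebModule itself; check the shiro-guice documentation for the exact builder methods in your version.

import javax.servlet.ServletContext;
import org.apache.shiro.guice.web.ShiroWebModule;

public class AppShiroModule extends ShiroWebModule {
    public AppShiroModule(ServletContext servletContext) {
        super(servletContext);   // exactly the ServletContext that is missing above
    }

    @Override
    protected void configureShiroWeb() {
        bindRealm().to(MyRealm.class);
        // HTTP Basic authentication for the whole REST API.
        addFilterChain("/api/**", AUTHC_BASIC);
    }
}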

So the resolution is very easy: just keep your Jetty server out of Guice’s tree, get the ServletContext from the server’s handler, and use it as a parameter when creating the main application injector. And that’s it – as simple as that. After setting up the authorization part (a task taken over from Jetty) you can change all annotations from Jersey’s to Shiro’s (that part was taken away from Jersey). And now migration to a new protocol will be painless, at least in the security domain.
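
Putting it all together, the startup sequence ends up looking more or less like this sketch (reusing the placeholder classes from the snippets above; the exact way Shiro’s filter gets hooked into GuiceFilter may differ between versions):

import java.util.EnumSet;
import javax.servlet.DispatcherType;
import javax.servlet.ServletContext;
import com.google.inject.Guice;
import com.google.inject.servlet.GuiceFilter;
import org.apache.shiro.guice.web.ShiroWebModule;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class Main {
    public static void main(String[] args) throws Exception {
        // The server and its handler live outside Guice's object graph.
        Server server = new Server(8080);
        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.addFilter(GuiceFilter.class, "/*", EnumSet.of(DispatcherType.REQUEST));
        context.addServlet(DefaultServlet.class, "/");
        server.setHandler(context);

        // The real ServletContext is available here, before the injector is built,
        // so ShiroWebModule gets what it needs.
        ServletContext servletContext = context.getServletContext();
        Guice.createInjector(
                new AppServletModule(),
                new AppShiroModule(servletContext),
                ShiroWebModule.guiceFilterModule()); // binds Shiro's filter into the Guice pipeline

        server.start();
        server.join();
    }
}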

Compare automatically generated files

While working on https://github.com/dervan/avro-fastserde I had to compare automatically generated code files. I had two directories with files generated by the new and the old version. Some of the files were generated more than once, but with the same content. There was also a difference in the random numbers appended to the ends of variable names. It’s pretty simple, but I like my one-liner, so I would love to save it here.

Building blocks are:

a) find Java files, each name just once:

find ./old/ -name "*.java" -printf "%f\n" | sort | uniq

b) a regex to change a random number into the string “ID”:

%s/\([a-z]\)[0-9]\+/\1ID/ge

c) a vim flag to quietly run a command in all windows after the files load:

 vim -c ':windo silent command' files

d) a find flag to quit after the first match:

 find ./old -name "$file" -print -quit

And together:

for file in `find ./old/ -name "*.java" -printf "%f\n" | sort | uniq`; do vimdiff +':windo silent %s/\([a-z]\)[0-9]\+/\1ID/ge' +':diffupdate' `find ./old/ -name "$file" -print -quit` `find ./new/ -name "$file" -print -quit`; done;

Well, and problem solved in only 210 chars 🙂

From photon to byte – part 1

Oh, of course, everybody knows how a point is transformed into a pixel. But personally, for a pretty long time I had a more or less blurred vision of what exactly is done by which matrix. Maybe it would be useful to write it down in a consistent way? In the next few articles I would like to show the path of data from photons in the 3D world to precise information in computer memory. So let’s start!

In our example we will use some sample points which form a simple cube. So let’s start with some code to draw a few points. We will use matplotlib to see the real coordinates.


import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np

# Cube vertices: all combinations of -1/1 coordinates.
r = [-1, 1]
cube = [[x, y, z] for x in r for y in r for z in r]

# Camera position C and a short segment marking the camera direction.
C = np.array([3*np.sqrt(3), 3/np.sqrt(2), 3/np.sqrt(2)])
camera_dir = [C, 0.8*C]

# Connect vertices that differ by exactly one coordinate (bit d of the index).
edges = [[x, x + 2**d] for x in range(0, 8) for d in range(0, 3) if not (2**d) & x]
# One extra edge for the camera direction segment.
edges.append([len(cube), len(cube) + 1])
cube.extend(camera_dir)

def plot(points):
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    for e in edges:
        ax.plot3D(*zip(points[e[0]], points[e[1]]))
    ax.auto_scale_xyz([-4, 4], [-4, 4], [-4, 4])
    ax.set_aspect("equal")
    plt.show()

plot(cube)

Okay, so we’re ready. At the start we have a scene with 3D points which emit photons in the direction of the camera (that’s why we see them). Unfortunately, we want to look at the world from the perspective of the camera, but usually we have a different point of reference than the center of our camera (let’s say, a corner of a desk or a room). Because all later computations need to be done in a camera-centered frame of reference, we have to transform the scene in some way. And here the first matrix appears: K.

K is the extrinsic parameters matrix. It transforms the whole world into one where the point \([0,0,0]\) is the center of the camera and the camera looks exactly in the direction of the Z axis. How can we do this transition? The easiest way is to first rotate the points so that the Z axis is aligned with the camera direction, and then shift all points so that the camera ends up at the origin of the coordinate system. How can a single matrix do both? Because, as usual, we’re working with homogeneous coordinates, which means that our point \([x, y, z]\) is in fact the point \([x, y, z, 1]\). Thanks to that, the translation is pretty easy – after the 3×3 rotation matrix \(R\) we append a column with the proper translation \(t\) – and it works like a charm:

$$ [R \,|\, t] \cdot [x, y, z, 1]^\top = [R \,|\, t] \cdot \left([x, y, z, 0]^\top + [0, 0, 0, 1]^\top\right) = R \cdot [x, y, z]^\top + t $$

Okay, so at the start we need to get the rotation matrix. The easiest way to do that is to compose it from three matrices, each of which represents a rotation around one axis. For the spicy math details I will direct you to Wikipedia.


def rotation2d(angle):
    # Standard 2D rotation matrix.
    return np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])

def rotation3d(angle_x, angle_y, angle_z):
    Rx = np.identity(3)
    Ry = np.identity(3)
    Rz = np.identity(3)
    # Embed the 2D rotation into the plane perpendicular to each axis.
    Rx[1:, 1:] = rotation2d(angle_x)
    # Fancy indexing writes into rows/columns 0 and 2 with the sign
    # convention of a rotation around the y axis.
    Ry[[[0, 2], [0, 2]], [[0, 0], [2, 2]]] = rotation2d(angle_y)
    Rz[:2, :2] = rotation2d(angle_z)
    # Apply the x rotation first, then y, then z.
    return np.dot(Rz, np.dot(Ry, Rx))

Okay, so now we can calculate the proper angles for our camera. After a moment of thinking we may find out that first we need to rotate the points by \(\pi/4\) around the \(x\) axis, then by \(\pi/6\) around \(y\) to get the camera vector pointing in the X direction. But we want the Z direction, not X – so we add an extra \(-\pi/2\) of rotation around \(y\). And we have the proper direction, nice.

What about the shift? Unfortunately, we rotated all the points, so we don’t know how to shift the whole scene to put the camera center at \((0,0,0)\). Or do we? Of course, it’s enough to take the opposite of the rotated vector \(C\): \(t = -R \cdot C\)


R = rotation3d(np.pi/4, np.pi/6 - np.pi/2, 0)
t = -np.dot(R, C)                     # shift that puts the rotated camera at the origin
K = np.hstack([R, t.reshape(3, 1)])   # extrinsic matrix [R | t]

After a quick verification:


>>> np.dot(K, np.append(C, 1))
array([ 0., 0., 0.])

We may be happy that everything works. Now let’s transform the whole scene:


def to_hmg(arr):
    # Append a column of ones: [x, y, z] -> [x, y, z, 1].
    return np.append(arr, np.ones((len(arr), 1)), axis=1)

def from_hmg(arr):
    # Drop the homogeneous coordinate again.
    return arr[:, :-1]

transformed_cube = [np.dot(K, point) for point in to_hmg(cube)]
plot(transformed_cube)
plot(transformed_cube)

Oh, great, it works! We transformed the whole world into one where the camera is the center of the frame. How is the data processed next? From this moment we start to think ‘as a camera’ and we will map points to pixels. But that’s the task of another matrix – and I will describe it in the next post.