Ratpack's excellent asynchronous support centers on one core class: Promise.

Ratpack Promises are easy to work with; there are just a few key points:

  • Attach to a promise only once
  • If you handle the error case, it must be attached before the success case
  • They are lazy
  • In Groovy we rely on implicit closure coercion to turn our closures into an Action

Happy Path

Consuming Value from Promise
Promise promise = somethingThatReturnsPromise()

promise.then {
  println it
}

What we are doing here is giving a closure to the promise; once the value is ready, the closure will be called with the value passed in as a parameter. We can also be explicit about what we are getting back from the promise.

Explicit Value from Promise
def p = httpClient.get {
  it.url.set(new URI("http://example.com"))
}

p.then { ReceivedResponse receivedResponse ->
  println receivedResponse.statusCode
}

If an error occurs while producing the value for the then block, the exception will be thrown, where it can be picked up by an error handler down the chain.

Error Callback

So far this works great when we are on the happy path and are content to let exceptions propagate. But we may also want to handle failures to fulfill the promise directly. To do this we attach onError before then.

Ratpack Promise with Failure Path
httpClient.get {
    it.url.set(new URI("http://example.com"))
} onError {
    println "Something went wrong: ${it.message}"
} then {
    render "Got a ${it.statusCode} status with body of:\n\n${it.body.text}"
}

onError will pass a Throwable into the closure, which you can log or use for whatever failure handling you would like.

Lazy Promises

Ratpack promises won’t actually start generating the value until the end of the current execution segment, after then has been called. This is done to allow for deterministic asynchronous operations.

Deterministic Promise
def doWork() {
  httpClient.get {  }.then {  }
  sleep 5000
  throw new RuntimeException("bang!")
}

What will happen in Ratpack is that we will always get the “bang!” exception, because the get request will not even be started until the doWork execution segment has finished. Once it finishes, the attached then {} triggers a background thread to start generating the value.

What not to do

You shouldn’t attach to a Promise more than once. What ends up happening is that two different promise instances execute in the background, when we only want to deal with the value once. So don’t do the following:

Don’t do this
def p = httpClient.get {
  it.url.set(new URI("http://example.com"))
}

p.onError {
  println it
}

p.then {
  println it.statusCode
}

Starting in Ratpack 0.9.9 the above code should actually throw an error.

Cassandra inserts and updates should always be modeled as upserts when possible. The query builder in the Java native driver doesn’t expose an upsert directly, but we can use updates instead of inserts in all cases. The update acts as an upsert, which reduces the number of queries you need to build.

Statement upsert = QueryBuilder.update("table")
        .with(QueryBuilder.append("visits", new Date())) //Add to a CQL3 List
        .where(QueryBuilder.eq("id", "MyID"));
session.execute(upsert);

Above you can see how we model our “upsert”. If no row exists for the given where clause, one will be inserted.
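For reference, the builder call above corresponds to CQL roughly of the following shape (the timestamp value is elided here; appending to a CQL3 list column uses list concatenation):

```sql
-- Rough shape of the CQL behind the update above
UPDATE table SET visits = visits + [ /* timestamp value */ ] WHERE id = 'MyID';
```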

You must use all parts of the primary key in an update's where clause. Given a CQL table with a compound key:

create table tablex(
     pk1 varchar,
     pk2 varchar,
     colA varchar,
     PRIMARY KEY(pk1,pk2)
);

We cannot do the following query:

Statement upsert = QueryBuilder.update("tablex")
                .with(QueryBuilder.set("colA", "2"))
                .where(QueryBuilder.eq("pk1", "1"));

You will get an InvalidQueryException:

com.datastax.driver.core.exceptions.InvalidQueryException: Missing mandatory PRIMARY KEY part pk2
  com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
  com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
  com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:172)

But the following will upsert:

Statement upsert = QueryBuilder.update("tablex")
        .with(QueryBuilder.set("colA", "2"))
        .where(QueryBuilder.eq("pk1", "1"))
        .and(QueryBuilder.eq("pk2", "2"));
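With both parts of the primary key in the where clause, the statement corresponds to CQL roughly like the following, which will insert the row if it does not already exist:

```sql
UPDATE tablex SET colA = '2' WHERE pk1 = '1' AND pk2 = '2';
```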

If you are working on a Groovy script with @Grab, you will sometimes get download failures for dependencies, such as the following:

General error during conversion: Error grabbing Grapes -- [download failed: com.google.guava#guava;16.0!guava.jar(bundle), download failed: org.javassist#javassist;3.18.1-GA!javassist.jar(bundle)]

These issues may have nothing to do with the actual dependency; the problem is often your local m2 cache. The quick answer is to delete ~/.groovy/grapes and ~/.m2/repository, but doing that will force you to re-download all dependencies.

To delete only the cache entries causing you trouble, remove the matching directories in both the m2 and grapes caches. For our Guava example you would do the following:

rm -r ~/.groovy/grapes/com.google.guava
rm -r ~/.m2/repository/com/google/guava

After that you should be able to run the Groovy script normally.

New Relic with Grails will, by default, trace most web transactions through the controller but will not trace down into services. Since most of the real work of a request happens in services or libraries, the default tracing leaves something to be desired.

This is easily fixed by adding New Relic annotations to services and libraries.

BuildConfig.groovy Changes

dependencies {
  compile 'com.newrelic.agent.java:newrelic-api:3.4.2'
}

Service Changes

import com.newrelic.api.agent.Trace

class SubscriptionService {

  @Trace
  def save(Subscription subscription) {
    //Work Here
  }
}

At this point your code is ready to report more detailed transactions, but the agent on the server must also be configured to accept custom tracing. This option is not available from the web UI, so you must update the newrelic.yml file and set enable_custom_tracing to true.

  #enable_custom_tracing is used to allow @Trace on methods
  enable_custom_tracing: true

Now you will see any custom tracing you added to your application, as well as custom tracing from libraries.

If you are running Grails 2.3.1, you may see the following sequence pop up before you get some odd test failures:

$ grails clean
| Application cleaned.

$ grails test-app
| Environment set to test.....
| Warning No config found for the application.
| Warning DataSource.groovy not found, assuming dataSource bean is configured by Spring

Run grails package in between and the problem will go away:

$ grails clean
| Application cleaned.
$ grails package
| Compiling 10 source files
| Compiling 12 source files.....

$ grails test-app
| Environment set to test.....
| Server running. Browse to http://localhost:8080/api
| Running 6 cucumber tests...
| Completed 6 cucumber tests, 0 failed in 0m 3s
| Server stopped
| Tests PASSED

Using the JMS 1.2 plugin with Grails 2.3.0.RC1 was producing a number of odd results, mostly missing JMS classes. It turns out that the new Spring version didn’t include the needed spring-jms dependency. Just add the following to BuildConfig.groovy:

dependencies {
  compile 'org.springframework:spring-jms:3.2.4.RELEASE'
  ...
}

Using the Grails Spring Security Core plugin, I found the need to customize the UserDetailsService and use a Grails service. (Part of the roles logic depended on an external API that we already had a service for.) This was easy to accomplish by subclassing the UserDetailsService class I wanted as a base. In my case that was the SpringSamlUserDetailsService class, because I was using the SAML plugin; normally you would subclass GormUserDetailsService. A great starting example is given in the documentation here.

The difference in my case was the need to use the Grails service; I went with providing the service in the resources.groovy file. Below is the file I used.

My resources.groovy

import com.example.saml.CustomUserDetailsService
import org.codehaus.groovy.grails.plugins.springsecurity.SpringSecurityUtils

beans = {
    userDetailsService(CustomUserDetailsService) {
        grailsApplication = ref('grailsApplication')
        myService = ref('myService')  //Here we give the reference to the service we want available.
        authorityClassName = SpringSecurityUtils.securityConfig.authority.className
        authorityJoinClassName = SpringSecurityUtils.securityConfig.userLookup.authorityJoinClassName
        authorityNameField = SpringSecurityUtils.securityConfig.authority.nameField
        samlAutoCreateActive = SpringSecurityUtils.securityConfig.saml.autoCreate.active
        samlAutoAssignAuthorities = SpringSecurityUtils.securityConfig.saml.autoCreate.assignAuthorities as Boolean
        samlAutoCreateKey = SpringSecurityUtils.securityConfig.saml.autoCreate.key as String
        samlUserAttributeMappings = SpringSecurityUtils.securityConfig.saml.userAttributeMappings
        samlUserGroupAttribute = SpringSecurityUtils.securityConfig.saml.userGroupAttribute as String
        samlUserGroupToRoleMapping = SpringSecurityUtils.securityConfig.saml.userGroupToRoleMapping
        userDomainClassName = SpringSecurityUtils.securityConfig.userLookup.userDomainClassName
        authoritiesPropertyName = SpringSecurityUtils.securityConfig.userLookup.authoritiesPropertyName
    }
}

Snip from CustomUserDetailsService.groovy

class CustomUserDetailsService extends SpringSamlUserDetailsService {
  def myService
...
}

Getting "SAML message intended destination endpoint did not match recipient endpoint" errors means the server itself doesn’t match the URLs being given in the SAML messages.

We are using the Grails Spring Security SAML plugin on a Tomcat server. In my case this was happening because we were doing SSL offloading on the load balancer. If you look at the logs, there should be an error entry showing the intended destination and the recipient endpoint.

In my case the first error differed only by http vs https. The fix for that was simply to apply the scheme attribute to that connector in Tomcat. At that point everything matched except that the port was now being added as 80 in my recipient endpoint, which wasn’t in the intended endpoint. The fix for this was to add the proxyPort to the connector as well.

So to fully support OpenSAML on Tomcat with SSL offloading, I configured the connector as seen below. Take note of the scheme and proxyPort attributes being set.

<Connector port="8080" protocol="HTTP/1.1"
               enableLookups="false"
               maxThreads="250"
               connectionTimeout="20000"
               scheme="https"
               proxyPort="443"/>

While working with Grails and the Spring Security plugin, the current Spring Security filter chain is available in the springSecurityFilterChain bean. With that, it is very easy to show what the current chain looks like so you can work through filter-chain issues. I used the following code in the Grails Console plugin to get the bean:

def filterChain = ctx.getBean('springSecurityFilterChain')

Also, if you want to poke around the other beans available, this is a great post to check out: Spring Beans from the Grails Console.
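As a sketch of what you can then do with the bean (assuming Spring Security 3.1+, where the bean is a FilterChainProxy; the filterChains and filters properties come from that class), you can dump the filters in each chain from the console:

```groovy
def filterChain = ctx.getBean('springSecurityFilterChain')
// FilterChainProxy holds a list of SecurityFilterChain instances,
// each with its own ordered list of servlet filters
filterChain.filterChains.each { chain ->
    println chain
    chain.filters.each { filter -> println "    ${filter.class.simpleName}" }
}
```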

I’ve been working with the DataStax Enterprise 2.01 install for a while now, and it was working great until one day I could no longer get any queries to work in cqlsh; I was just getting an error that one or more nodes was unavailable. I tried restarting, and still nothing would work; I got a few errors in the logs (shown below).

I was able to quickly fix the error by removing my data directory and starting fresh, which works fine since this is just my development environment. You can find your data directory in the cassandra.yaml file ($DSE_HOME/resources/cassandra/conf/cassandra.yaml); look for the data_file_directories entry. Mine was set to /var/lib/cassandra/data, so I just ran the following, started Cassandra fresh, and everything was back in working order.

rm -r /var/lib/cassandra/data
INFO [JOB-TRACKER-INIT] 2012-12-28 10:32:32,515 JobTracker.java (line 2427) problem cleaning system directory: cfs:/tmp/hadoop-jeffbeck/mapred/system
java.io.IOException: UnavailableException()
  at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.listSubPaths(CassandraFileSystemThriftStore.java:1137)
  at com.datastax.bdp.hadoop.cfs.CassandraFileSystem.listStatus(CassandraFileSystem.java:192)
  at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2392)
  at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2195)
  at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2189)
  at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:303)
  at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:294)
  at org.apache.hadoop.mapred.HadoopTrackerPlugin$1.run(HadoopTrackerPlugin.java:230)
  at java.lang.Thread.run(Thread.java:680)
Caused by: UnavailableException()
  at org.apache.cassandra.service.ReadCallback.assureSufficientLiveNodes(ReadCallback.java:212)
  at org.apache.cassandra.service.StorageProxy.scan(StorageProxy.java:1083)
  at org.apache.cassandra.thrift.CassandraServer.get_indexed_slices(CassandraServer.java:746)
  at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.listSubPaths(CassandraFileSystemThriftStore.java:1120)
  ... 8 more