12 October 2015
As the tools we use become increasingly powerful and extendable, so comes the temptation to over-exploit their functionality. This isn't always a bad thing, but sometimes we get lured into using tools for purposes for which they were never intended. This can have adverse effects in the long term. Let's look at those in turn:
Vendor lock-in
Upgrade nightmares
Increased tool load
As hard as we try, oftentimes we can't avoid having to use certain tools. Whether it's because we don't have the time or skills to build our own, sometimes we just have to buy. My first suggestion is that whenever you adopt a new tool you should always think about an exit strategy. I've seen companies stuck in bed with vendors whose license costs rise exponentially on an annual basis, and they find themselves debating whether they should use their £millions of maintenance money to move off a tool or platform. Mainframe, anyone? Don't let a tactical decision end up as your strategic solution!
When you leverage a tool for functionality outside of the norm, you run the risk of those features being removed at any time. I once had a scenario where some colleagues had extended Jenkins to create a web form which would post REST calls to another back-end system. They did this through a plugin that had a fairly small community for support. The problem came when we needed to upgrade Jenkins to get the latest patch fix: the plugin was still lagging behind and broke with the upgrade. In this instance a simple X-on-Rails style web framework would have done the job and avoided any dependency on Jenkins.
Increased tool load. Let's go back to the Jenkins example above. If we have thousands of people filling in this web form and submitting to back-end systems, then we are placing additional load on Jenkins. Fact. Jenkins is a build system, and can often be fairly critical to delivery. If we place unplanned load on the service and bring it down, then we run the risk of hindering everyone else who relies on the tool.
When deciding to use a tool for a certain problem, check whether it belongs to that problem domain. For example, Jira is a task management tool, and Confluence is a wiki used for knowledge sharing. So why would you store documents in Jira? Equally, Jenkins is for building and deploying software; keep it inside that domain and you will make your life easier in the long run.
We're all guilty of wanting to keep up with the unicorns and play with the latest tools, but let's face it, we don't have their budgets or resources. If we invest in a tool for a new project without experimenting first, we could find ourselves running out of time and unable to roll back. My advice is to stick to what you know unless you have an understanding delivery manager.
Engineer: An engineer is a professional practitioner of engineering, concerned with applying scientific knowledge, mathematics, and ingenuity to develop solutions for technical, societal and commercial problems. Enough said!
18 August 2015
These days we take dependency management for granted. We simply specify the dependencies we want, and our build tool does the rest. I'm as guilty as the next person of not investigating the authenticity of my dependencies so long as my code works. This naivety is riddled with risks to your business, or worse, to anyone who chooses to use your code.
Imagine I'm developing a super awesome framework that I eventually want everyone to use. I add a few dependencies and my code works perfectly. I publish the binary on Maven Central and my code is downloaded 100 million times in the first 10 minutes. Awesome!
This looks harmless on the face of it. I would say most people work in this way, right? As a simple exercise to demonstrate just how risky this is, I want you to go to Maven Central and search for the most popular Java test framework, JUnit. I just did it and got 399 entries returned. Which one do I choose? Am I using the actual JUnit? The second problem is that adding a dependency tells us nothing about how much care those developers were taking from a security perspective. It could be riddled with security holes. Finally, and potentially most dangerous of all, I don't know what dependencies JUnit is also pulling in. These transitive dependencies could also be victims of all of the above.
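To make that last point concrete, here is a rough build.gradle sketch (the JUnit coordinates are real; the task itself is just illustrative) that declares a single dependency and then prints everything the configuration actually resolves, transitive dependencies included:

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    // the only line we wrote ourselves
    testCompile 'junit:junit:4.12'
}

// prints every artifact on the resolved test compile classpath,
// including anything JUnit pulls in itself (e.g. hamcrest-core)
task printResolvedDeps {
    doLast {
        configurations.testCompile.resolvedConfiguration.resolvedArtifacts.each { artifact ->
            println artifact.moduleVersion.id
        }
    }
}

Run gradle printResolvedDeps and you will see more than the one line you declared - that extra, invisible surface area is exactly the risk described above.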
Dependency Check is a tool from the OWASP team that checks the dependencies you are using for known risks. The core engine contains a series of analyzers that inspect the project dependencies and collect pieces of information about them (referred to as evidence within the tool). The evidence is then used to identify the Common Platform Enumeration (CPE) for the given dependency. If a CPE is identified, the associated Common Vulnerability and Exposure (CVE) entries are listed in a report.
Once you have this information available you can make a more informed decision about the dependencies you want to carry on using.
I was so impressed with this tool that I decided to work on the Dependency Check Gradle Plugin. You can find this plugin in the Gradle Plugin portal, so feel free to give it a try and provide feedback.
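If you want to try it, the sketch below shows roughly how such a plugin is wired into a build. The plugin id, version and task name here are assumptions on my part - they have changed between releases, so check the Gradle Plugin Portal listing for the current coordinates:

// build.gradle - a minimal sketch, not a definitive configuration
plugins {
    id 'java'
    // id and version are illustrative; see the plugin portal for the real ones
    id 'org.owasp.dependencycheck' version '1.4.0'
}

repositories {
    mavenCentral()
}

dependencies {
    testCompile 'junit:junit:4.12'
}

// recent versions expose the analysis as a task along the lines of:
//   gradle dependencyCheckAnalyze
// which writes a report listing any CVEs matched against your dependencies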
23 June 2015
This tutorial was prompted by recent blog posts: Tor for technologists and How to Route Traffic through a Tor Docker container.
The idea here is that we use a Docker container to run the Tor client, then, using Chrome and Proxy Switch Omega, we can switch between secure and non-secure browsing easily.
Step 1: Run Nagev's Tor container:
$ docker run -d --name tor_instance -p 9150:9150 nagev/tor
Step 2: Install Proxy Switch Omega in your Chrome browser.
Step 3: Follow the tutorial for an explanation on how to create your own profile. Click "+ new profile", set name to "Tor" and check type is set to "Proxy Profile".
Step 4: In the Tor profile settings screen set the protocol to SOCKS5, then insert the IP of your Docker container and the port you forwarded. Note: you may need to run boot2docker ip to get the IP if using boot2docker.
Step 5: Switch the profile to Tor using the button on toolbar. Press the button and click "Tor" from the dropdown.
Step 6: Check it's all working: https://check.torproject.org/
24 May 2015
When adding mobile support to your application you want to quickly see what it looks like on multiple devices. While we can use a real mobile device to test things out, this process can be quite cumbersome and time-consuming. It's easier to find a plugin for our browser that switches the user agent settings for us.
Whilst this is by no means the only solution, I have found it to be the easiest. User-Agent Switcher is an extension for the Chrome browser which lets you easily make the browser present itself as a mobile device.
26 April 2015
If you read my series titled Vagrant, Amazon EC2, Docker and Microservices, you may have got to the end and shuddered. If you haven't read it, let me explain: I was compiling the microservice on the desktop, then using Vagrant to copy the package and the Dockerfile to the cloud instance to perform the docker build. From there I performed a docker run and tested the microservice. Quite clunky and slow - especially the copy across.
After some thought, and a better understanding of Docker, I settled on the following workflow, which I uploaded to www.slideshare.net.
In this slideshow you will see much more thought has gone into the process. We begin in a development environment where a developer can build and run their Docker image until they are satisfied. Once happy, they push to their version control system of choice. At this point the build machine fires into action: it should validate the code and build the image. You may run a series of tests against the container at this point. Once satisfied, you push to your Docker repository (I use Bintray, but you can use DockerHub). The image is then ready to be consumed in other environments.
23 April 2015
I had an issue today where I was working with Jenkins and my release package was given the same name as the Jenkins job. For those of you familiar with Jenkins and the Git plugin, you will know that the workspace is given the same name as the job, and the source is downloaded into the workspace. What you may not have known is that Gradle infers the name of the project from the root directory name.
To overcome this issue it's a good idea to set up a settings.gradle file to honour your project's name. Create settings.gradle in the root directory:
rootProject.name = 'myCoolProjectName'
This will also protect you in the open source world, where some Git users may choose to clone your repository into a directory with a different name from the one you originally set.
You may also wish to do this with your sub projects too. I find it makes working in an IDE more pleasant.
rootProject.name = 'myCoolProjectName'
findProject(':a-long-web-dir-name').name = 'web'
findProject(':a-long-api-dir-name').name = 'api'
20 April 2015
This filter picks up any items that slip through the built-in filter:
assignee = currentUser() AND status != Closed ORDER BY Rank ASC
This gives you a list of tickets you have been an assignee on since the start of the month:
assignee was currentUser() after startOfMonth()
This lists the bugs in a given project that were resolved in the last seven days:
project = <project_name> AND issuetype = Bug AND resolved >= -7d
17 March 2015
For this tutorial I'm going to use a Spring Boot application to help us prove the concepts. There are loads of microservice frameworks to choose from, but here we will use Spring Boot. Maybe in the future I will look at trying out some other popular frameworks. Links to the Spring guide are given at the bottom of this tutorial.
At this point we could cheat and do everything in Gradle. Here’s a handy tutorial showing how to perform your build and docker run using Gradle.
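For completeness, here is a rough idea of what that Gradle-only approach can look like - a sketch using plain Exec tasks, where the image name, port and Dockerfile location are purely illustrative:

// hypothetical additions to the app's build.gradle
task buildDockerImage(type: Exec, dependsOn: 'build') {
    workingDir projectDir
    // assumes a Dockerfile in the project root that references the built jar
    commandLine 'docker', 'build', '-t', 'willis7/gs-spring-boot', '.'
}

task runDockerImage(type: Exec, dependsOn: buildDockerImage) {
    commandLine 'docker', 'run', '-d', '-p', '8080:8080', 'willis7/gs-spring-boot'
}

In this tutorial, though, we will keep the Docker commands in their own scripts so each step stays visible.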
When I first started writing this article I had plans of going through a 101 for Docker. Since then I have found a fantastic YouTube video called Docker 101: Dockerizing Your Infrastructure, and I just don't think I can add anything to that excellent tutorial. So, rather than starting from the foundations, I have decided to assume you have watched that video.
Now that we can do the basics with Docker, let's start setting out our requirements for the container. First, we need Java installed so that we can start our microservice. Then, it would be good if we could do some monitoring once the service is started. This is especially important if we plan on running this in a production setting. For that reason we're going to move away from plain Ubuntu images and use an extended version of the baseimage image by Phusion. The image is called flurdy/oracle-java7, and if we look at the Dockerfile for flurdy/oracle-java7 we can see this image is using phusion/baseimage:latest and bootstrapping it with Java.
Let's start building our Dockerfile. I have created mine in ${projectDir}/app/docker:
FROM flurdy/oracle-java7:latest
MAINTAINER willis7
EXPOSE 8080
ADD build/libs/gs-spring-boot-0.1.0.jar /opt/msvc/gs-spring-boot.jar
CMD java -jar /opt/msvc/gs-spring-boot.jar
With the FROM keyword Docker will first look for the image locally, then look to the public repo if it doesn’t find it.
MAINTAINER simply tells the reader who the author of this Dockerfile is.
EXPOSE tells Docker that a port is to be exposed when the container is started. Now, let's add our build output to the container; we do that using the ADD keyword:
The ADD instruction basically takes a <src> and <dest>. If the <dest> path doesn’t exist, it is created along with the missing directories along its path.
Finally, we need to tell Docker what command should be run when the container is executed. We do that with the CMD keyword.
And that's it!
If you look in my GitHub repository you will find two helper scripts - one to build the image and the second to run it:
docker-build.sh
docker-run.sh
Let's run through the steps:
# Build the source code and run unit tests
$ cd app
$ ./gradlew clean build
# Create and provision a VM
$ cd ..
$ vagrant up
# ssh into the box
$ vagrant ssh
# cd to the docker scripts
$ cd /vagrant/docker/
# docker build
$ . docker-build.sh
# docker run
$ . docker-run.sh
# test service running
$ curl localhost:8080
Greetings from Spring Boot!
At this point we could get Ansible to do all of these steps, but as this is a learning exercise it's nice to go through the motions. If you were to add these steps, they would sit nicely in the Docker playbook we refactored out earlier.
This tutorial series has been a whistle-stop tour through a few different tools and technologies. Whilst the example was extremely simple, hopefully you can see how this could be applied to a Continuous Delivery pipeline for on-demand test environments. For the small effort up front in writing your scripts (which in this case were super simple!), you can save costs by not having "always on" environments that aren't being used. Also, because we provisioned the environments using code, we can rest assured the environments are the same every time - and we get the luxury of being able to version control our scripts as a result.
If you have any questions or would like me to clarify anything in these tutorials then please feel free to add a comment below.
Resources:
16 March 2015
After thinking about the hack I put in to get Puppet installed on the box before I could use it, I felt a little dirty, and decided that maybe Puppet wasn't the best decision after all.
So, the problem is that I need an agent installed before I can provision my box, but I'm trying to automate the provisioning - catch-22. Here's where Ansible has really stepped up.
I suppose we could have continued using the shell provisioner, but Adam Brett raises a good point on this:
"Why not just use Bash scripts, then? Ansible has an edge over Bash scripts because of its simplicity. Ansible just uses a list of tasks to run in YAML2 format. Ansible also comes with idempotency out of the box. That means you can run the same operation numerous times, and the output will remain consistent (i.e. it won’t do something twice unless you ask it to). You can write Bash scripts this way, but it requires quite a bit more overhead."
There is one small caveat with Ansible - we have to install it on the machine that will be running the Vagrant script (Ansible calls this the control machine). I've added a link to the docs down in the resources section.
Puppet has manifests, Chef has cookbooks and Ansible has playbooks.
As we did with Puppet, let's create an Ansible dev environment:
# create a dir for ansible scripts from project root
$ mkdir playbooks
# change directory
$ cd playbooks/
# create a playbook file for vagrant
$ touch playbook.yml
The tasks are the same as before, and in playbook.yml we express them in the following way:
---
- hosts: all
sudo: true
tasks:
- name: update apt cache
apt: update_cache=yes
- name: install docker.io
apt: name=docker.io state=present
Now we need to tell Vagrant that we want to use the Ansible provisioner. Replace the previous provisioner blocks with the following:
config.vm.provision :ansible do |ansible|
ansible.playbook = "playbooks/playbook.yml"
end
You can now run vagrant up --provider=aws and you should see Ansible being used for the provisioning:
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
ok: [default]
TASK: [update apt cache] ******************************************************
ok: [default]
TASK: [install docker.io] *****************************************************
changed: [default]
PLAY RECAP ********************************************************************
default : ok=3 changed=1 unreachable=0 failed=0
Again, we can ssh into the box and prove Docker is installed:
# ssh into our instance
vagrant ssh
# run docker
sudo docker run -i -t ubuntu /bin/bash
At the moment our playbook is really simple, so you could argue it's not worth refactoring, but as this is a learning exercise I think it's worth going through the motions.
The Wikipedia definition goes as follows:
"In computer science, separation of concerns (SoC) is a design principle for separating a computer program into distinct sections, such that each section addresses a separate concern. A concern is a set of information that affects the code of a computer program."
The Docker installation task is a nice place to add some separation. Whilst the task is very simple at the moment, it could get more complex over time. Keeping it separate makes the code easier to read, but also sets us up to reuse it in the future.
Start by adding a new directory called tasks and then add a file for Docker, docker.yml:
---
- name: install docker.io
apt: name=docker.io state=present
Note: don't get caught out by whitespaces - they will fail your build.
Now, we need to update our playbook to include this new file. Change your playbook.yml file to match the following:
---
- hosts: all
sudo: true
tasks:
- name: update apt cache
apt: update_cache=yes
- include: tasks/docker.yml
Test this new configuration by running vagrant destroy followed by vagrant up --provider=aws again. Everything should work exactly as before.
Resources:
10 March 2015
If you have ever imported another build using apply from: "${rootDir}/gradle/publish.gradle", then you will appreciate how it's a little difficult to know exactly what has been applied to your build by said apply action.
I often use this pattern when I want to clearly separate the parts of my build. In my build scripts you may see something like:
apply from: "${rootDir}/gradle/sonar.gradle"
apply from: "${rootDir}/gradle/acceptance-testing.gradle"
apply from: "${rootDir}/gradle/deploy.gradle"
apply from: "${rootDir}/gradle/publish.gradle"
This is very clear and works very well, but sometimes I just don’t need to be notified of all the tasks a build file imports.
In those cases a nicer solution may be to use the GradleBuild task type as shown below.
task publish(type: GradleBuild) {
buildFile = "${rootDir}/gradle/publish.gradle"
tasks = ['publishGhPages']
}
I think this is really clear, and if you run gradle tasks you should find all the other tasks from that build file omitted.
10 March 2015
I was just watching a video on the PuppetLabsInc YouTube channel in which Michael Stankhe presents "Getting Started with Puppet". It's a great presentation and really thought provoking, but the bit I like most is what he said about patterns.
"I use [the term] patterns, I dont use the words best practice because it implies I know all possible options and all your variables. I dont! Patterns are generally good ideas - sometimes they wont work for you, you may be in a situation where one of these patterns fails miserably."
09 March 2015
In the first part of this tutorial, we showed how to use Vagrant to automate and manage an Amazon EC2 instance. We defined a simple Vagrantfile to specify certain attributes for an instance to run, and got it running using Vagrant's command line tools. In this part of the tutorial, we'll be using Puppet to define and automate the configuration details for our instance. This way, whenever we start up the environment with vagrant up, it will be set up to run Docker without any additional manual configuration.
The documentation for Docker is very good. Let's use that to drive the requirements of our Puppet scripts:
Let's start by setting up our Puppet dev environment:
# create a dir for puppet scripts from project root
$ mkdir manifests
# change directory
$ cd manifests/
# create a default manifest file for vagrant
$ touch default.pp
The first thing we will want to do on our newly created instance is ensure the apt-get package database is up to date. This can be achieved with the following block:
exec { "apt-get update":
path => "/usr/bin",
}
Once that's complete we will want to install the Docker package:
package { "docker.io":
ensure => present,
require => Exec["apt-get update"],
}
Note: here we have built a dependency in Puppet. We are saying we don't want to continue with this task until the execution of apt-get update is complete.
At this point we have a bit of a chicken and egg situation. We want to run Puppet on our box to provision it, but Puppet isn’t currently installed. We can use the shell provisioner to solve that initial problem.
config.vm.provision :shell do |shell|
shell.inline = "sudo apt-get install -y puppet-common"
end
Note: the -y is a nice trick which we use to force a yes when prompted whether we want to continue.
Since we have placed our Puppet script in the default location, all we need to do is add the following line to the Vagrantfile:
config.vm.provision :puppet
And that's it!! We can test this works by running vagrant ssh followed by sudo docker run -i -t ubuntu /bin/bash. If all is well then you should see something similar to:
Unable to find image 'ubuntu' locally
Pulling repository ubuntu
2d24f826cb16: Download complete
511136ea3c5a: Download complete
fa4fd76b09ce: Download complete
1c8294cc5160: Download complete
117ee323aaa9: Download complete
Resources:
09 March 2015
Microservices are all the rage at the moment, but in my experience they just move the bottleneck. Yes, the speed of development increases massively, but it does so at the cost of an increased dependency on the Build and Ops guys.
This blog series is about running a complete and fully functional microservice in the cloud using Vagrant, Amazon AWS and Docker. The goals are as follows:
Provisioning of the EC2 instance should be automated
The microservices should run in their own containers
The setup and configuration of the containers should be fully automated, no manual steps required
Capture everything in a GitHub project.
The idea here is that if we can automate the whole process, then we will quickly see the real benefits of using a microservice-based architecture.
Let's introduce the tools…
Vagrant is a nice way to manage our EC2 instances. We can use Vagrant to create an instance and provision the box to the state we desire.
Once Vagrant is installed on your dev machine, to use the AWS provider type in Vagrant we will need to install the Vagrant AWS plugin. That can be done with the following command:
vagrant plugin install vagrant-aws
Note: this took a while on my machine without a great deal of feedback. Just be patient, it will finish eventually.
So, now let's create our project and the Vagrantfile. Run the following commands:
# create a project folder
$ mkdir infra-n-app-automation
# change directory
$ cd infra-n-app-automation
# create the vagrant file
$ vagrant init
You should get a message similar to:
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
If we follow the Vagrant AWS plugin docs, we can see the basic Vagrantfile should look as follows:
Vagrant.configure("2") do |config|
config.vm.box = "dummy"
config.vm.provider :aws do |aws, override|
aws.access_key_id = "YOUR KEY"
aws.secret_access_key = "YOUR SECRET KEY"
aws.session_token = "SESSION TOKEN"
aws.keypair_name = "KEYPAIR NAME"
aws.ami = "ami-7747d01e"
override.ssh.username = "ubuntu"
override.ssh.private_key_path = "PATH TO YOUR PRIVATE KEY"
end
end
The guide suggests putting your access_key_id and secret_access_key in the Vagrantfile, which is fine if you have a private repository, but as I plan on making this public I will set them using environment variables:
export AWS_ACCESS_KEY="AKXXXXXXXXXXXXXXX"
export AWS_SECRET_KEY="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
After configuring my Security Group, selecting an AMI and sorting my private key, my Vagrantfile now looks like:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
# Box configuration
config.vm.box = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
# Share an additional folder to the guest VM.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider
config.vm.provider :aws do |aws, override|
aws.keypair_name = "dev"
override.ssh.username = "ubuntu"
override.ssh.private_key_path = "~/.ssh/dev.pem"
aws.ami = "ami-234ecc54" #Ubuntu 14.04.1 LTS
aws.region = "eu-west-1"
aws.instance_type = "t2.micro"
aws.security_groups = ["WebServerSG"]
aws.tags = {
'Name' => 'Vagrant'
}
end
# Provisioning
end
We can now start the VM using the command:
vagrant up --provider=aws
and once the instance is available we can connect using:
vagrant ssh
finally, when we want to stop the instance we can run:
vagrant halt
I had a few challenges when I first started with Vagrant and AWS. They were:
Security Groups - You will notice I used the WebServerSG group. I set this up as per the Amazon documentation found here. Until I made this change I was hanging at the "Waiting for SSH to become available…" stage.
AMI - If you want to use the free tier I would recommend going through the Amazon "Launch Instance" wizard and recording the AMI id for your region and price plan. I found some of the online examples referenced AMIs that simply didn't exist or were region-specific.
This concludes part one of the tutorial. We can now create and control the lifecycle of an EC2 instance, and in part two we will install Docker and any other dependencies.
The source can be found in the repository below: https://github.com/willis7/infra-n-app-automation
Be very careful with your Amazon details on the web. I have provided a solution above for removing them from your source code. For a more in-depth example see here. Don't end up like this poor fella: My $500 Cloud Security Screwup
08 March 2015
07 March 2015
Dependency management has come a long way over the past 10 years, but I believe it has some way to go before we can say the problem is solved.
Consider the scenario where you have developed a library which inadvertently introduced a severe security vulnerability. Because your organisation believes in reuse, it has been used in many different projects. The Maven POM (Project Object Model) does a good job of providing us with metadata about the modules which are suppliers to a project, but it doesn't capture information about who the consumers are.
So, we have a dangerous library in the wild, but we can't say with any certainty who is consuming it. At this point the only solution is to trawl through every project's POM and look to see whether it declares this library as a dependency. This is going to make your day very unpleasant if you have more than a handful of projects, and if you've moved in the direction of microservices then this is going to be hell!
As I mentioned earlier, a Maven POM provides us with a way of describing what dependencies a project has. These are identified using a standard set of attributes: groupId, artifactId and version. There are other attributes, but we will ignore them for now.
groupId - a macro group or family of projects or archives to which a project belongs. For example, org.hibernate and org.richfaces.ui.
artifactId - the unique identifier of the project among the projects sharing the same groupId. For example, junit, hibernate-annotations, and richfaces-components-ui.
version - a version number.
Let's turn our dependency tree into a graphical representation.
Wouldn’t it be good if we could store all of these project graphs in a single location where they could establish relationships with other projects?
We’re already talking about graphs, so wouldn’t a graph database be a good place to start?
The underlying data model of a graph database is what's called the Property Graph data model. Essentially, it means that we will be storing our data in a graph database, and that we will be using vertices and edges (or nodes and relationships) to persist our data.
This works really well for the problem we're trying to solve because we're talking about artifacts (nodes) and their relationships (edges) with other artifacts. Let's look at some code:
@NodeEntity
public class Artifact {
@GraphId Long id
String groupId
String artifactId
String version
@RelatedTo(type = "DEPENDS_ON", direction = Direction.OUTGOING)
public @Fetch Set<Artifact> dependencies
public void dependsOn(Artifact artifact) {
if ( !dependencies ) {
dependencies = new HashSet<Artifact>()
}
dependencies.add(artifact)
}
}
Note: I'm using Spring heavily here - this may look unfamiliar if you don't know Spring.
Here you can see I've constructed the node using the attributes I defined earlier. There is an id, annotated with @GraphId, that Neo4j uses to track the data, plus a groupId, artifactId and version. Inside this node entity I have also defined a Set<Artifact> of dependencies marked up as @RelatedTo. This means that every member of this set is expected to also exist as a separate Artifact node, and this node DEPENDS_ON them.
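To make the relationship concrete, here is a quick Groovy sketch that wires up a tiny graph in memory using the entity above. The coordinates are illustrative, and persisting the graph would go through a Spring Data Neo4j repository, which I'm not covering here:

// build a small in-memory graph with the Artifact entity defined above
def hamcrest = new Artifact(groupId: 'org.hamcrest', artifactId: 'hamcrest-core', version: '1.3')
def junit = new Artifact(groupId: 'junit', artifactId: 'junit', version: '4.11')
def myLib = new Artifact(groupId: 'com.example', artifactId: 'super-awesome-framework', version: '1.0.0') // hypothetical library

// myLib DEPENDS_ON junit, and junit DEPENDS_ON hamcrest-core
junit.dependsOn(hamcrest)
myLib.dependsOn(junit)

assert myLib.dependencies.contains(junit)
assert junit.dependencies.contains(hamcrest)

Because each dependency is itself an Artifact node, answering "who consumes this library?" becomes a simple traversal of the incoming DEPENDS_ON relationships.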
So, this concludes part 1. Part 2 coming soon…
17 February 2015
$ brew install mongodb
$ mkdir -p /data/db
Ensure that the user account running mongod has the correct permissions for the directory:
$ sudo chmod 0755 /data/db
$ sudo chown $USER /data/db
$ mongod
Note: If you get something like this:
exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
It means that /data/db lacks the required permission and ownership.
Run ls -ld /data/db/. The output should look like this (willis7 is the directory owner and staff is the group to which willis7 belongs):
drwxr-xr-x 7 willis7 staff 238 Aug 5 11:07 /data/db/
17 February 2015
Reading a Maven POM is Easy with Gradle and Groovy!
The inspiration for this post came from the post here: Reading info from existing pom.xml file using Gradle?
Naively, I implemented the first solution, which is given below.
defaultTasks 'hello'
repositories {
mavenCentral()
}
configurations {
mavenAntTasks
}
dependencies {
mavenAntTasks 'org.apache.maven:maven-ant-tasks:2.1.3'
}
task hello << {
ant.taskdef(
resource: 'org/apache/maven/artifact/ant/antlib.xml',
uri: 'antlib:org.apache.maven.artifact.ant',
classpath: configurations.mavenAntTasks.asPath)
ant.'antlib:org.apache.maven.artifact.ant:pom'(id:'mypom', file:'pom.xml')
println ant.references['mypom'].version
}
Now, this solution did meet the original poster's requirement. However, after running the hello task I was surprised to see a few libraries being downloaded, which didn't feel slick.
In true Groovy fashion this can be achieved much more simply using the code below:
def pom = new XmlSlurper().parse(new File('pom.xml'))
println 'my pom version ' + pom.version
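The same approach extends to the rest of the POM. As a small follow-up sketch (assuming a standard pom.xml with a dependencies block), you can walk the declared dependencies too:

def pom = new XmlSlurper().parse(new File('pom.xml'))

println "my pom version ${pom.version}"

// walk the <dependencies> block and print each group:artifact:version
pom.dependencies.dependency.each { dep ->
    println "${dep.groupId}:${dep.artifactId}:${dep.version}"
}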
16 January 2015
Ok, so this one had me stumped for a while and the solution was extremely simple.
I read lots of information in the Gradle forums on this and it sent me in the wrong direction. Loads of articles say to set certain flags/GRADLE_OPTS, which isn't necessary.
So, in IntelliJ (I'm using version 14), set your breakpoint, then from the Gradle Tool Window's All Tasks area, right-click the task and select the Debug option from the context menu.
Voila!
31 December 2014
So, it's the last day of 2014 and I've been thinking about some of the things I've achieved, and giving some thought to what I would like to achieve in 2015. It's a well known fact that information changes, so to stay current any self-respecting IT professional should be constantly aspiring to learn new things, or to better understand what they know already.
Here’s my shortlist of subjects I would like to enhance my knowledge of in the new year:
White hat hacking & Penetration testing
Sysadmin
Meta programming
Work through a few Google recommended subjects
Let's discuss them in turn:
We all know how to test our code with unit, integration and exploratory testing. However, most people rarely pay much attention to the security elements. Ok, so most people can run static analysis with tools such as those from OWASP, but that still leaves the dynamic elements vulnerable. My aim is to understand some of the common ways of hacking software, and then use that knowledge to better inform some design and implementation patterns in my code.
I can navigate my way through a *nix system fairly easily, but I wouldn't say I'm a pro. In fact, if someone took Google away I would be fairly stuck. My goal is to learn 15 of the most common commands and practice using them on a regular basis. This is very much how I learnt to use Git proficiently.
An interesting concept that I've heard used a lot, but never taken a deep dive into. I'm aware of its benefits in creating DSLs, so I would like to spend some time working with this cool feature. As I'm still having a lot of fun with Groovy, I suspect that will be the language I use to explore metaprogramming.
I'm particularly interested in discrete mathematics. As a student I didn't really enjoy maths, but as I've grown older (and wiser) I've gained an appreciation for it. I guess I've come to accept that without maths, we probably wouldn't have IT as we know it today. It's only fair that I give it the respect it deserves.
So, that's my list of things I will be playing around with in 2015. I will try to blog what I've learnt and how I've understood each subject, and hopefully share my experience with other like-minded people.
30 December 2014
First of all, a great thanks to Mac for steering me in the direction of JBake. His blog inspired me to spend the time and implement my own version.
I’m not going to spend a long time on this because there is a plethora of information online - rather I will add tips here and links to the resources I used.
Here’s a list of tools you will need to follow along:
Use GVM to install the latest Gradle and JBake. This will save you a lot of manual installation. Sorry Windows people, but you will have to install manually unless you use Cygwin and GVM.
Follow the instructions on the Git website to install Git.
Tip: see the resource "Github - Creating Project Pages manually".
Create a new repository on GitHub - I called mine blog.
Make a fresh clone
Create a gh-pages branch - this must be an orphaned branch. Follow the steps in the link above.
Tip: see the resource "Authoring your blog on GitHub with JBake and Gradle".
Note: as of writing this blog the publish task has been renamed to publishGhPages.
To finish off this project I followed Cédric’s tutorial shown in the link above. However, rather than have 2 separate build files I opted for a single build script. I didn’t notice any of the classpath issues that Cédric has raised in his post.
My build file:
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'me.champeau.gradle:jbake-gradle-plugin:0.2'
// optional, if you use asciidoctor markup
classpath 'org.asciidoctor:asciidoctor-java-integration:0.1.4'
// optional, if you use freemarker template engine
classpath 'org.freemarker:freemarker:2.3.19'
classpath 'org.ajoberstar:gradle-git:0.12.0'
}
}
apply plugin: 'me.champeau.jbake'
apply plugin: 'org.ajoberstar.github-pages'
githubPages {
repoUri = 'git@github.com:willis7/blog.git'
pages {
from(file('build/jbake')) {
into '.'
}
}
}
Note: I'm using the SSH communication protocol with GitHub. I did try simple authentication, but had problems and switched to SSH. There's a great tutorial by Atlassian that guides you through the steps. Whilst it is for Bitbucket, the steps are relevant to GitHub.
For the amount of blogging that I do, I guess it begs the question why I would switch from Blogger. Well, the truth is it was more about the challenge and the learning. When I originally opened my blog it was intended to be a personal reminder of problems I encountered and the solutions I found. If someone else gets benefit from them, then that's even better.
So far I have enjoyed the flexibility brought through the use of JBake. I can play with many template engines, explore CSS and JS all within the confines of my own blog. Also, as I’m using Git if I stuff something up I can always revert back to a working version.
A pleasurable experience in all. Easy to get started too!
29 December 2014
I was having some real headaches with debugging my unit tests today. With the introduction of forked execution came the breakage of IntelliJ debugging.
There are some rather long-winded ways of attaching remote debuggers, and Ted does a great job of breaking the problem down and offering a solution. However, in my rather tiny Grails application I was happy to sacrifice the benefits of forked execution - at least for my tests.
So, in BuildConfig.groovy I changed the following block:
grails.project.fork = [
test: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, daemon:true],
run: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve:false],
war: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve:false],
console: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256]
]
to:
grails.project.fork = [
test: false,
run: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve:false],
war: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve:false],
console: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256]
]
This change tells Grails that when we run the test configuration it shouldn’t run in forked mode - this is anything we would run using the test-app Grails command.
11 December 2014
Method on class [] was used outside of a Grails application. If running in the context of a test using the mocking API or bootstrap Grails correctly.
Note: the full exception was:
java.lang.IllegalStateException: Method on class [com.willis.heimdall.Booking] was used outside of a Grails application. If running in the context of a test using the mocking API or bootstrap Grails correctly.
at com.willis.heimdall.BookingIntegSpec.test saving a booking to the db(BookingIntegSpec.groovy:24)
I recently had this error on one of my simple examples. A real facepalm moment in retrospect, but ho hum - the fix is nice and easy.
Broken Code:
package com.willis.heimdall
import org.joda.time.DateTime
import spock.lang.Shared
import spock.lang.Specification
/**
* Integration tests for the Booking model
* @author Sion Williams
*/
class BookingIntegSpec extends Specification {
@Shared def today = new DateTime()
@Shared def todayPlusWeek = today.plusWeeks(1)
def 'test saving a booking to the db'() {
given: 'a new booking booking'
def booking = new Booking(name: 'my booking',
startTime: today.toDate(),
endTime: todayPlusWeek.toDate())
when: 'the booking is saved'
booking.save()
then: 'it can be successfully found in the database'
booking.errors.errorCount == 0
booking.id != null
Booking.get(booking.id).name == 'my booking'
Booking.get(booking.id).startTime == today.toDate()
Booking.get(booking.id).endTime == todayPlusWeek.toDate()
}
}
Fixed Code:
package com.willis.heimdall
import grails.test.mixin.TestFor
import org.joda.time.DateTime
import spock.lang.Shared
import spock.lang.Specification
/**
* Integration tests for the Booking model
* @author Sion Williams
*/
@TestFor(Booking)
class BookingIntegSpec extends Specification {
@Shared def today = new DateTime()
@Shared def todayPlusWeek = today.plusWeeks(1)
def 'test saving a booking to the db'() {
given: 'a new booking booking'
def booking = new Booking(name: 'my booking',
startTime: today.toDate(),
endTime: todayPlusWeek.toDate())
when: 'the booking is saved'
booking.save()
then: 'it can be successfully found in the database'
booking.errors.errorCount == 0
booking.id != null
Booking.get(booking.id).name == 'my booking'
Booking.get(booking.id).startTime == today.toDate()
Booking.get(booking.id).endTime == todayPlusWeek.toDate()
}
}
Note here that we have told Grails what we are testing with the @TestFor() annotation so that it can set up the relevant mocks and stubs in the background.
17 October 2013
This simple snippet adds two additional output listeners - standard out and standard error - and pipes their output to a build log.
import org.gradle.logging.internal.*

def tstamp = new Date().format('yyyy-MM-dd_HH-mm-ss')
def buildLogDir = "${rootDir}/build/logs"
mkdir("${buildLogDir}")
def buildLog = new File("${buildLogDir}/${tstamp}_buildLog.log")

System.setProperty('org.gradle.color.error', 'RED')
gradle.services.get(LoggingOutputInternal).addStandardOutputListener (new StandardOutputListener () {
void onOutput(CharSequence output) {
buildLog << output
}
})
gradle.services.get(LoggingOutputInternal).addStandardErrorListener (new StandardOutputListener () {
void onOutput(CharSequence output) {
buildLog << output
}
})
19 May 2012
When you start working with distributed domains there will come a time when you need to pack the domain and unpack it in its distributed areas.
Whether you create your domain via the GUI or by scripting, all you’re actually doing is creating a series of configuration files. At this point you’re not actually starting any servers - that comes later.
Let's consider the following architecture:
AdminServer = Machine A
Managed01 = Machine B
Managed02 = Machine C
Cluster01 = Managed01, Managed02
So, you have run through the wizard and configured the domain above. You should now notice your domain has been created on Machine A, but if you log into Machine B or C nothing exists. This is where the need to pack and unpack comes in.
To pack the domain run the following WLST script:
# Create a template .jar of an existing domain
# Open an existing domain
readDomain(domainDirName)
# Write the domain configuration information to a domain template
writeTemplate(templateName)
closeDomain(templateName)
This script opens the domain and extracts (as a jar) the configurations required for the servers that will reside on Machines B and C. It’s a skeleton configuration because the Admin server information will be excluded - a domain only ever has 1 Admin server.
Now that we have a templateName.jar we can send it to the machines that the rest of the domain will reside on and run the unpack script on each machine:
# unpack.py: convert from unpack command to wlst script
# This script shows how to convert from the unpack command to a wlst script.
# Note that the domain and template values, and the options to setOption, must be single-quoted
# Specify the template that you want to use
readTemplate('c:\wls9\user_templates\wlst_wls_template.jar')
# If you specified the -username and -password option in the unpack command,
# Specify them here. Otherwise, delete these lines
# Note that the domain_name field here is just the name of the domain, not the full path as specified in writeDomain below
cd ('/Security/<domain-name>')
create (<user_name>,'User')
cd ('User/<user_name>')
set ('Password',<password>)
# analogous to unpack -java_home
setOption('JavaHome',<java_home>)
# analogous to unpack -server_start_mode
setOption('ServerStartMode',<server_start_mode>)
# analogous to unpack -app_dir
setOption('AppDir',<app_dir>)
# write the domain
writeDomain(<domain>)
closeTemplate()
Older posts are available in the archive.