Tuesday, November 24, 2009

Gander at Greensmart: The Other Wicket Web App

For this week in my Software Engineering class, after the first round of Wicket application implementations, I was assigned to evaluate Greensmart's Wicket application.

The details of the assignment were much like reviewing the WattDepotCLI branches in my previous post, but a few modifications were made to the review criteria. The review criteria for this assignment can be found here.

Click here to go to Greensmart's GoogleCode Page
My full review of the system in PDF format can be downloaded here

My overall thoughts:
Overall I think the project fulfills the assignment at the most basic of levels. It does take in a date and output the carbon intensity throughout that day. But without first reading the homepage of their GoogleCode site, it's hard to make out the purpose of the application. These are all just minor things that can be fixed given some more time.

The source code itself could use more descriptive JavaDocs and more in-line comments explaining implementation details. A few tweaks to make the web application more descriptive are also in order. They might also want to work on showing colors in the cells rather than text. One last thing about the interface: it should provide an overall conclusion based on the data being displayed (e.g. carbon usage is currently high, so hold off on heavy wattage for X hours... just something to think about).
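
To make the color suggestion concrete, here is a small sketch of what I mean; the class name and the intensity thresholds are made-up illustrations, not values from their data. It just maps an intensity value to a cell color instead of printing text.

    public class IntensityColor {
      /** Maps a carbon intensity (lbs CO2 / MWh) to a display color. */
      public static String colorFor(double lbsCo2PerMwh) {
        // Thresholds below are hypothetical; real cutoffs would come from the grid data.
        if (lbsCo2PerMwh < 1500) {
          return "green";   // relatively clean: fine to run appliances
        }
        else if (lbsCo2PerMwh < 1800) {
          return "yellow";  // about average
        }
        return "red";       // carbon-intensive: hold off on heavy usage
      }
    }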

The group appears to be working well together. Judging from the Hackystat sensor logs, I could see certain people doing the initial tasks, and then towards the end the other members took over.

Monday, November 23, 2009

Tag Teaming Wicket and WattDepot: Front End Web Apps

For this week in my Software Engineering class, we were to combine the groups we had for the WattDepotCLI with another group and design a front-end web application for the WattDepot service that emulated the Ecotricity UK Grid Live.

Our most current distribution can be downloaded at the Greenometer GoogleCode site.

The web application framework we were instructed to use was Wicket, an open-source technology from Apache. It was chosen because of the time frame we had to develop our project and because of its ease of integration with Java. It took a while to get used to Wicket's coding style. Reading the book was semi-helpful, but most insight came from the ICS Wicket Examples. From these two resources I gained a basic understanding of how Wicket interacts with Java.
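
To give a sense of how Wicket ties Java to HTML, here is a minimal sketch (the class and component names are hypothetical, not our actual Greenometer code): each Java component is added under an id that has to match a wicket:id placeholder in the page's HTML template.

    import org.apache.wicket.markup.html.WebPage;
    import org.apache.wicket.markup.html.basic.Label;

    public class HomePage extends WebPage {
      public HomePage() {
        // "intensity" must match a wicket:id in the HomePage.html template,
        // e.g. <span wicket:id="intensity">placeholder</span>
        add(new Label("intensity", "Carbon intensity for the selected day"));
      }
    }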

Creating a project as a duo was already hard enough when it came to delegating work. In this assignment, two groups were combined to make a four-person team. At first we didn't know what to do; on the first workday of the assignment I managed to do the initial setup of GoogleCode, GoogleGroups, Hudson, and Hackystat, but after that it was a toss-up. Luckily one of my group members, Kelvin Green, stepped up and took charge. He pretty much laid the groundwork for the whole project. Communication thrived within the group: we talked through AIM, exchanged e-mails regularly, and asked questions when we needed clarification.

I think the overall design of the system turned out pretty organized. We used the Google Chart system to create a nice breakdown of how the carbon intensity varied throughout a specific day. The form fields on the page were broken up into three sections to avoid most formatting errors, so any invalid input can be easily isolated. One thing that bothered me was that we didn't output a specific numeric breakdown of the lbs of CO2 / MWh. The graph looks nice, but it only gives an estimate of the current carbon intensity. Then again, would the average consumer understand the specifics, or just the green, yellow, and red colors telling them when to use certain appliances?
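
For anyone curious how the chart itself comes together, here is a rough sketch, not our exact code, of building a Google Chart image URL from a list of hourly intensity values; the class name, image size, and scaling range are illustrative assumptions.

    import java.util.List;

    public class ChartUrlSketch {
      /** Builds a vertical bar-chart URL from hourly carbon intensity values. */
      public static String barChartUrl(List<Double> hourlyIntensity) {
        StringBuilder data = new StringBuilder();
        for (int i = 0; i < hourlyIntensity.size(); i++) {
          if (i > 0) {
            data.append(',');
          }
          data.append(String.format("%.1f", hourlyIntensity.get(i)));
        }
        // cht = chart type (vertical bars), chs = image size,
        // chds = data scaling range, chd = data in text encoding.
        return "http://chart.apis.google.com/chart?cht=bvs&chs=600x200"
            + "&chds=0,2000&chd=t:" + data;
      }
    }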

For this project we also used the Hackystat Software ICU. Work had already commenced before the sensors were properly installed, so some of the data may not depict an accurate progression of the project.

Here's a screen capture of our Vital Signs for our Project covering the past week:
11 / 16 to 11 / 23


Vital Sign Analysis:
Coverage: The coverage fluctuated throughout the week, but it pretty much stayed around 85%. Aside from a few test cases which ran all the methods, coverage over time was pretty flat.
Complexity: We can see immediately that there is a rising "red" trend, probably due to the amount of code that's packed into each method. Though the trend doesn't look good, the immediate value is green, showing that the system as it stands is not that complex.
Coupling: The same argument as for Complexity applies here. Since most of the methods are found in the application Java file, there's increased coupling because that one application Java file uses all the other Java files.
Churn: This is one of the sensors that didn't get installed correctly and was only fixed sometime after development had commenced. Although the churn is currently red, the trend seems to be decreasing.
DevTime: There's a high spike in development time, as this was another sensor that didn't install correctly. The spike can best be explained by the rush to get everything done by the due date.

Overall, the system is pretty unique compared to other systems. From this experience, the most challenging part is having someone step up and do the initial tasks to get everything set up. Once that's done, there's the even harder task of deciding who does what. Task delegation is the hardest part; it seems as though there's always someone who works the most and someone who does the least.

What's nice about having someone delegate tasks is that everyone knows what they're doing, but that person in charge has to go the extra mile of organizing. I have yet to be in a group where everyone is at an equal level and knows what the task at hand is, but even then, I think splitting the workload is a task in itself.

Monday, November 16, 2009

Enter WattDepotCLI Branch Umi v2.0-ish

WattDepotCLI Branch Umi v.2 can be downloaded here.

After reading through all the reviews and comments made on Branch Umi, it was time to put those reviews to good use and polish up our system. This assignment has been like no other in my previous ICS classes, the major difference being working with a partner. I could have sworn I wished for a partner back in the entry-level Java classes; it's not until now that I can see the advantages and disadvantages of having one. Using SE tools such as Ant, SVN, and automated quality assurance tools did make the sharing process easier, but the one element that cannot be done by a computer is the cooperation and output of another person. What I mean can be summed up as "two heads are better than one": throughout this WattDepot experience, sometimes that was true and other times it wasn't.

For the last stages of the version 2 implementation, we were introduced to the Hackystat Project, which acts as a sort of Software ICU (Intensive Care Unit) that monitors our WattDepot project's "vital signs", such as commits, lines of code, complexity, coupling, etc. Hudson just barely scrapes the surface of this idea, while Hackystat takes project monitoring to a whole new level.

The initial implementation of version 1 of Branch Umi satisfied the functionality requirements set for v1.0, but a lot of the reviews said we now need to focus on creating a high-quality design complete with test cases and separate packages for the classes. The system does not implement every command set by the version 2.0 specification; the one command not implemented is the last one, 2.13 carboncontent. My partner and I decided to forgo 2.13 and focus on quality with what we have. The test cases that we have only check that each command outputs known values; we did not test for invalid inputs. The quality of our system could have been better through test cases, but we have little to show for it. I tried to delegate this task to my partner, but I ended up writing some of the basic tests myself. With the test cases that we do have, Emma reports:
  • class: 86% (18/21)
  • method: 84% (38/45)
  • block: 55% (2185/3980)
  • line: 61% (464.3/755)
On the surface the coverage looks pretty good. Most of the methods and classes are exercised by the tests, but once you look at the block and line portions of the report, you can see that only roughly half of all the code we implemented is being tested. So even though the first two numbers seem okay, the last two show the "real" extent of the tests.
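
For a sense of what "checking known values" means, the tests we do have look roughly like the sketch below; PowerCommand and its execute method are hypothetical stand-ins for our actual command class, and the expected value is the one from the simulated Oahu data discussed further down.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class TestPowerCommand {
      @Test
      public void testKnownPowerValue() {
        // Hypothetical command class; the real one lives in our branch.
        PowerCommand command = new PowerCommand();
        // Expected value taken from inspecting the simulated data by hand.
        // Invalid inputs are never exercised, which is one reason the block
        // and line coverage numbers stay low.
        String output = command.execute(new String[] {
            "generated", "SIM_OAHU_GRID", "timestamp",
            "2009-11-26T20:00:00.00-10:00"});
        assertEquals("9.95E2", output);
      }
    }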

As far as group process goes, I am a little disappointed we didn't meet more often. There was also the problem of what to actually do when we did meet. Obviously the task at hand was to complete the project, but how do we go about splitting it up? My partner seemed like he just wanted to be told what to do rather than take the initiative himself, which is fine by me except when the work doesn't get done. I feel as though if I took more initiative and really did tell him what to do, I'd come across as bossy, which I don't want to be, but I guess with what's at stake it can't be helped. But I digress.

The Hackystat sensors we installed in our project were something I found interesting and cool. Seeing the colored bars on the progress of our project really adds perspective on the health of the project. We can also see where the project is heading, and whether the changes we make are for the better or actually worsen its condition. It sure beats looking at the sunny/cloudy/thunderstorm icons in Hudson, and it provides more meaningful information both on the current state and on the trend.
Here's a screenshot of the latest Hackystat analysis for Branch Umi:

(You'll have to click it to see a larger version)

We can see that there's a mixture of red and green. The coverage from our test cases has been steadily increasing, but the number 62.0 is yellow, meaning the coverage is about average but not that great. Complexity is surprisingly high, probably due to the large number of if-statements used to check the form of each command. Coupling looks great, thanks to the splitting of each command into its own Java class. It took a while to get everything configured properly for Hackystat to receive sensor data from both our command lines and Eclipse, so Hackystat could have missed about a day's worth of data. The last parts, DevTime, Commit, Build, and Test, show some activity, but it's hard to get a clear sense of where they're headed.

For the last part of the assignment we were to answer a few questions to test the functionality of our WattDepot implementation. Unfortunately we did not get to finish implementing the last command, carboncontent, so we are unable to answer the last two questions.

What day and time during the month was Oahu energy usage at its highest? How many MW was this?
Command: power generated SIM_OAHU_GRID timestamp 2009-11-26T20:00:00.00-10:00
9.95E2

What day and time during the month was Oahu energy usage at its lowest? How many MW was this?
Command: power generated SIM_OAHU_GRID timestamp 2009-11-28T02:45:00.000-10:00
4.96E2

What day during the month did Oahu consume the most energy? How many MWh was this?
Command: powerstats generated SIM_OAHU_GRID day 2009-11-26 sampling-interval 60 statistic max
9.95E2

What day during the month did Oahu consume the least energy? How many MWh was this?
Command: powerstats generated SIM_OAHU_GRID day 2009-11-26 sampling-interval 60 statistic min
4.93E2

What day during the month did Oahu emit the most carbon (i.e. the "dirtiest" day)? How many lbs of carbon were emitted?

What day during the month did Oahu emit the least carbon (i.e. the "cleanest" day)? How many lbs of carbon were emitted?

In order to get the correct dates to enter, I actually had to pull up excess data and stare at a bunch of output until I saw the appropriate number. From there, I simply recorded the timestamp and entered the corresponding command to elicit the correct output.

I also asked another developer, Kendyll Doi, about how to approach this problem. He had an interesting idea of using the chart command to get a general sense of the peaks and valleys during each day, then using the powerstats command with different timestamps to verify the minimum and maximum energy.

Wednesday, November 11, 2009

Outside Insight on Wattdepot-Cli Branch Umi

In my previous post I completed reviews of two other branches of the Wattdepot-cli: Eha and Elima. Yesterday, my partner and I sat down and went over the comments left by those who had our branch as their assignment.

The reviewers for our branch were:

For the most part, my partner and I generally knew where our system was lacking. Most of the comments centered around test cases (we had none), package documentation, and the overall design of the system. There were some specific errors mentioned, but those could be fixed without any major repercussions.

What we need to do is:
  • CREATE TEST CASES!
  • Separate each method into its own class (see the sketch below)
  • Have separate packages for processor and command
  • Add more descriptive JavaDocs and explain how the packages interact with each other
  • Re-code some lines because the Wattdepot library was recently updated
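
As a concrete picture of the second and third items, here is a minimal sketch of the shape we're aiming for; the package and interface names are hypothetical, not committed code. Each command becomes its own class behind a common interface, living in the command package, so the processor only has to dispatch by keyword.

    package edu.hawaii.wattdepotcli.command;  // hypothetical package name

    /** One CLI command, e.g. "power" or "powerstats". */
    public interface Command {
      /** Returns the keyword used to invoke this command. */
      String getName();

      /** Runs the command with the given arguments and returns its output string. */
      String execute(String[] args);
    }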

Reviewing other branches made me realize there were more efficient ways of doing things, such as creating a parent list of all sources, and, when writing out to a file, displaying the absolute file path so the user knows exactly where to find the output file. I will end up adapting code from other branches to make our source code more elegant and straightforward. The checklist that Prof. Johnson provided made the review experience straightforward and pleasant. Now that I have a basis for reviewing code, I know what to look for when reviewing future projects, both mine and others'.

Sunday, November 8, 2009

Review of Wattdepot-CLI Branches: Eha and Elima

Following the same cycle as we did for Robocode, after the first deployment of each branch of Wattdepot-CLI it came time for the review. Our assignment this week for my Software Engineering class was to review two other branches of Wattdepot-CLI. My two assigned branches were Eha and Elima.

Eha's distribution version 1.0 can be downloaded here.
Elima's distribution version 1.0 can be downloaded here.

For this assignment, Prof. Johnson gave us a checklist that we followed to review each branch. The basic rundown of the review was to make sure the branch builds successfully and passes the automated quality assurance tools (PMD, Checkstyle, FindBugs, JUnit), then check the functionality, source code, and overall design of the system.

Instead of posting my two reviews directly in the post, I've uploaded PDF versions which can be downloaded at the following:
I'll also summarize my thoughts on each of the branches:

Branch Eha
Eha was the first branch I reviewed. I was able to build the system and run all the commands; only "list sources" didn't work. The one thing that I disliked was the lack, and random placement, of error messages. Generally, if there was something wrong with the command entered, e.g. missing parameters or a misspelled keyword (generated|consumed), the system would seemingly dismiss the input and bring up a blank prompt. It was good that the system could tell there was something wrong, but it would be better if the system could respond with what was wrong with the entered command. Other than that, they just need to split the rest of the code into their respective classes, divide them into processor and command packages, and create more in-depth test cases.
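
To illustrate the kind of feedback I had in mind, here is a small sketch; the class and method names are hypothetical and not Eha's actual code. The idea is just to hand back an explanation instead of silently re-prompting.

    public class InputCheck {
      /** Returns an error message for an invalid keyword, or null if it is valid. */
      public static String checkKeyword(String keyword) {
        if (!"generated".equals(keyword) && !"consumed".equals(keyword)) {
          return "Unknown keyword '" + keyword + "': expected 'generated' or 'consumed'.";
        }
        return null;
      }
    }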

Branch Elima
This branch was sort of the opposite of Eha. Although they had the correct design structure (i.e. separate classes and separate packages), which I failed to do in my branch, not all the commands worked. I had to look at the source code before I was supposed to in the checklist. What I discovered was that although they had separate classes for each command, and each contained source code and wasn't empty, not all the commands were mapped onto the HashMap. This made it slightly irritating and cumbersome to test whether the system implemented a command. I tried looking at the blogs of both developers on the branch to gain insight as to which commands weren't implemented, but neither of them mentioned that they didn't map all the commands onto the HashMap. I spent a good 30 minutes trying to figure out which commands the system did accept. Overall, a well-organized system that needs to be filled out with complete commands and test cases.

Wednesday, November 4, 2009

Programming Duo: Wattdepot CLI

For the past couple of weeks in my Software Engineering class I have been working on developing a branch of the command-line interface for the open-source project Wattdepot. In essence, Wattdepot is a RESTful web service that collects electricity data (such as current power utilization or cumulative power utilization) from meters and stores it in a database.

I have been working with a partner, Bao Ung, and together we have developed our own branch of the Wattdepot CLI.

Our project branch is designated "umi" (as our group number was ten) and our distributed system can be downloaded at the Wattdepot-cli Google Project site by clicking here.

By using all of the SE tools we have been introduced to thus far (Ant, SVN, Google Code, and Hudson), we were able to effectively develop the project concurrently. Ant's automated builds and quality assurance tools ensured our project would run, SVN allowed both of us to commit changes to a single repository so each of us would have the most up-to-date classes, and Hudson recorded our build progress as well as our project's overall "build" health.

For the most part our system does run functionally and is able to process each command and return output. However, even with the extension given to us by Prof. Johnson, neither of us found time to create test cases to thoroughly test our system. We also didn't break up the methods into their own classes, although it shouldn't be very difficult to do so, as all the methods are independent and don't output directly to the console; they all return strings. We just need to create two new packages, "command" and "processor." All the code is well commented and all Java files pass verify.build.xml.

There were some minor work-arounds that we discovered worked better in the end. One of them was that, if there were big changes made to the system (i.e. refactoring methods to suit a more dispatch-style command mapping), it was best to just have the person with the old version delete their existing source code and "update" so they would have the most recent version. Also, we had a little trouble organizing how the command arguments should be processed; after Prof. Johnson's lecture on HashMaps, we got some insight on how to proceed, roughly along the lines of the sketch below.
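
The names here are hypothetical, not our committed code, but the dispatch idea looks roughly like this: each command object is registered under its keyword, and processing a line becomes a map lookup instead of a long if-else chain.

    import java.util.HashMap;
    import java.util.Map;

    public class CommandProcessor {
      /** One CLI command; each command would live in its own class. */
      public interface Command {
        String execute(String[] args);
      }

      private final Map<String, Command> commands = new HashMap<String, Command>();

      /** Registers a command under the keyword used to invoke it. */
      public void register(String keyword, Command command) {
        commands.put(keyword, command);
      }

      /** Looks up the keyword and dispatches to the matching command. */
      public String process(String keyword, String[] args) {
        Command command = commands.get(keyword);
        if (command == null) {
          return "Unknown command: " + keyword;
        }
        return command.execute(args);
      }
    }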

A good portion of time was spent reading the Wattdepot API; this was the main hindrance to work progress. All the commands were laid out for us (they can be found here); we just needed to know how to access the data.

In the beginning it was hard to split the work. Previous ICS classes dealt with solo projects where sharing code was not allowed, so Bao and I were a little stumped as to who should do what. What we both realized is that it was easier to code in modules and have separate classes for each command; that way we wouldn't run into merge conflicts from modifying the same file. In our situation, though, we saw each other daily, so merge conflicts weren't that much of a problem since we worked in close proximity.

Monday, November 2, 2009

Hudson + SCM = <3

For this week in the Software Engineering class, we were introduced to another software engineering tool, Hudson, which is a continuous integration tool. Currently I am working on a branch of the CLI (command-line interface) for the Wattdepot-CLI project; there are about 13 other branches also in the making.

Continuous integration means having your software project automatically built after someone commits to the project. Shortly after a commit has been made, your CI (continuous integration) tool can poll your Software Configuration Management (SCM) system and see whether that commit breaks the system. This is extremely helpful when it comes to having the most up-to-date WORKING system.

Some of the features Hudson includes are an online console where you can watch a build in real time, and a build archive so you can see where a build failed. You can have it set up to build continuously or to simply poll the SCM to see whether the version Hudson has is the most recent one. In our case we have it set up to poll every 5 minutes.
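
For reference, Hudson's "Poll SCM" option takes a cron-style schedule of five fields (minute, hour, day of month, month, day of week); assuming that field, a five-minute poll looks something like this:

    */5 * * * *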

When a commit does break the system, Hudson is able to send out an e-mail notification to all users on the project (we use a GoogleGroup to accomplish this), which in effect creates a more stable system, since all developers will see the error and hopefully try to correct it. Hudson also records how many test cases are run and graphs the number of cases versus time.

Setting up Hudson was relatively straightforward; the screencast that was made really broke it down into steps. All you really need to know is the URL to the trunk of your project. What's also nice about Hudson is that it checks out your project anonymously, so you don't have to give it sensitive information like a username and password to build your project.

I didn't know what to expect from continuous integration, other than from listening to Prof. Johnson's screencast about how great it is, but after almost a week under CI, I can see that it is indeed a powerful tool for creating a more stable, growing system.

I can't really say there's anything bad about using Hudson, other than the ton of e-mail I get for every build success and failure, both mine and every other branch's, but that's easily dealt with. For this small two-person project I could live without it, but for larger projects that require groups of developers, I can't see myself not using CI to ensure every section of the software is up and running.