Wednesday, December 9, 2009

Greenometer v2.0: Saving Green$ While Going Green

The final project for my Software Engineering class was to polish my group's first iteration of our Wicket application, Greenometer. Continuing from my previous post about creating Greenometer, this blog post contains my final thoughts on the whole process. The final specifications for version 2.0 were released here.

Greenometer v2.0 can be downloaded here.

Prof. Johnson had originally planned for us to make a v1.1 release (specifications found here), but he decided to forgo the v1.1 due date and have v2.0 due instead, the reason being that v2.0 should include all the revisions and changes that would have been made in v1.1.

So, for the last month of the semester I have been working around the clock with my group members to meet the specifications for v2.0. I have gotten more acquainted with using Wicket in combination with Java and took on various tasks, ranging from the UI of our interface to the underlying source code and testing.

My journey into first-semester Software Engineering - ICS 413 certainly gave me a little taste of every aspect of being a software engineer. I learned the group collaboration process, which entails splitting group work, getting group members motivated to participate, and making sure the project is still on schedule, all while having an overall great time. The coding aspects seemed to revolve around learning how to create test cases and learning the difference between high coverage and quality coverage. There is definitely a difference between creating a functional interface and a quality interface.

Prof. Johnson stressed that although creating a quality end product is the ultimate goal, a partially complete system that is high quality is better than a mostly functional system that is only partially tested.

Our final project features 5 pages:
  • Home
  • Stop Light
  • Grid
  • Concepts
  • Contact Us
As noted in my previous post regarding this project, the purpose of this application is to provide users with detailed information about the Oahu power grid, in an attempt to draw users' attention to going green and saving energy. Our simulated application allows users to view the energy generated at specific sources given a date or range of dates.

Project Details

Home
The home page briefly describes the application's purpose and functionality, mainly outlining the other four pages: Stop Light, Grid, Concepts, and Contact Us.

Stop Light
This page features a stop light that shows either red, yellow, or green depending on the Carbon Intensity (lbs CO2 / MWh) at the current time. The page queries WattDepot, gathers carbon intensities for the whole day, and then determines the color value for the current time.
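
To give a rough sense of the logic, here's a minimal sketch of how a carbon intensity value might be turned into a stop-light color. The thresholds and class name are made up for illustration; they aren't the exact figures Greenometer uses:

```java
/**
 * Minimal sketch of the stop-light idea: map a carbon intensity
 * (lbs CO2 / MWh) to a color. Thresholds here are made-up examples.
 */
public class StopLightSketch {

  public static String colorFor(double carbonIntensity) {
    if (carbonIntensity < 1400) {
      return "green";   // relatively clean generation right now
    }
    else if (carbonIntensity < 1800) {
      return "yellow";  // moderate carbon intensity
    }
    return "red";       // dirtiest generation; hold off on heavy usage
  }

  public static void main(String[] args) {
    System.out.println(colorFor(1500)); // prints "yellow"
  }
}
```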

Grid
The Grid page features a Google Chart, either a bar or a line chart as chosen by the user, based on the energy emitted (MWh). The user can also specify which source he/she wants to view. It defaults to the overall Oahu power grid, but the source can be changed via a drop-down menu.
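
As a rough illustration of how a chart like this can be requested, here's a sketch of building a Google Chart image URL from a list of energy values. The parameter names (cht, chs, chd, chds) are standard Google Chart options, but the values and method are just examples, not Greenometer's actual chart code:

```java
// Simplified sketch of building a Google Chart image URL for energy data.
// The values and method name are illustrative only.
import java.util.Arrays;
import java.util.List;

public class ChartUrlSketch {

  public static String lineChartUrl(List<Double> energyMWh) {
    StringBuilder data = new StringBuilder();
    for (int i = 0; i < energyMWh.size(); i++) {
      if (i > 0) {
        data.append(",");
      }
      data.append(energyMWh.get(i));
    }
    // cht=lc requests a line chart (cht=bvs would give a bar chart),
    // chs sets the image size, chds=a auto-scales the data range.
    return "http://chart.apis.google.com/chart?cht=lc&chs=500x250&chds=a&chd=t:" + data;
  }

  public static void main(String[] args) {
    System.out.println(lineChartUrl(Arrays.asList(512.0, 640.5, 995.0)));
  }
}
```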

Concepts
This page outlines the motivation for the application and gives an in-depth explanation of how to understand and use the Grid and Stop Light pages.

Contact Us
Provides the project's Google Site, discussion group, and a link to each developer's professional portfolio.

From Robots to Energy Grids: Software Engineering Reflection

Prof. Johnson's ICS Software Engineering class was a unique experience this semester. It was the first class that actually made me feel like a computer scientist. Working towards a real-world problem, collaborating with others, and learning new tools all made it surreal and thrilling. Granted, this course takes a lot out of you; at times I felt like the assignments didn't allow enough time and that I had to focus most of my energy on this course. This course felt like a job, and my pay was my final grade.

What I found most valuable were the new software engineering tools such as Ant and Subversion, and the quality assurance tools: JUnit, Checkstyle, PMD, and FindBugs. For the assignments that we did, from Robocode to WattDepot to Wicket, it was hard to find a balance between quality and functionality. Prof. Johnson constantly encouraged us to create test cases as we developed our projects, but I found it difficult to come up with what aspects to test. Testing also takes a considerable amount of time with every run of Ant; JUnit would take an excessive amount of time to complete, and it added to the overall time spent waiting for the project to pass verify.

The group aspect gave me another take on what it is like to collaborate with others, especially with those in my field. It seemed that the group went only as far as its strongest person. Sure, everyone has their forte, but doing everything shouldn't be one of them. It was hard to find an even split of work for everyone to do. There was always some aspect that bled into another, and it was hard to work on a single method or class without waiting for someone else to commit their changes.

We were introduced to an array of technologies and experienced many different aspects of being a software engineer. We were fortunate enough to have coded in Java throughout the semester, a language that has been drilled into my brain ever since my beginning semesters as an ICS student. The one thing that I would like to learn more about, though, is a framework for planning. Splitting the workload is always a big issue. How do you come across as knowledgeable rather than bossy? How do you get someone to carry their own weight in a tactful way that doesn't sound too threatening? Indeed, things might have turned out differently if everyone in our group had shared what our strengths and weaknesses were. However, group work isn't something that is deeply encouraged in the lower levels of ICS, so it takes some getting used to the concept of collaboration and the sharing of ideas.

Learning a new system and how to use it, such as WattDepotClient, and learning a new framework, such as Wicket, are the types of things expected of every computer scientist in order to keep up in this field.

Tuesday, November 24, 2009

Gander at Greensmart: The Other Wicket Web App

For this week in my Software Engineering class, after the first implementation of our Wicket applications, I was assigned to evaluate Greensmart's Wicket application.

The details of the assignment were much like the WattDepotCLI reviews in my previous post, but a few modifications were made to the review criteria. The review criteria for this assignment can be found here.

Click here to go to Greensmart's GoogleCode Page
My full review of the system in PDF format can be downloaded here

My overall thoughts:
Overall, I think the project fulfills the assignment at the most basic of levels. It does take in a date and output the Carbon Intensity throughout the day. But without ever reading the homepage of their GoogleCode site, it's hard to make out the purpose of the application. These are all just minor things that can be fixed given some more time.

The source code itself could use more descriptive JavaDocs and more in-line comments explaining implementation details. A few tweaks to make the web application more descriptive are also in order. They might also want to work on getting colors in the cells rather than having text. One last thing about the interface is that it should provide an overall conclusion based on the data being displayed (i.e., carbon usage is currently high, hold off on extensive wattage for X hours… just something to think about).

The group appears to be working functionally; judging from the Hackystat sensor logs, I could see certain people doing the beginning tasks, and then, towards the end, the other members taking over.

Monday, November 23, 2009

Tag Teaming Wicket and WattDepot: Front End Web Apps

For this week in my Software Engineering class, we were to combine the groups we had for the WattDepotCLI with another group and design a front-end web application for the WattDepot service that emulated the Ecotricity UK Grid Live.

Our most current distribution can be downloaded at the Greenometer GoogleCode site.

We were instructed to build the web application using Wicket, an open-source framework from Apache. It was chosen because of the time frame we had to develop our project and because of its ease of integration with Java. It took a while to get used to Wicket's coding technique. Reading the book was semi-helpful, but most of my insight came from the ICS Wicket Examples. From these two resources I gained a basic understanding of how Wicket interacts with Java.
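
The core idea, as I understand it, is that a Java page class adds components whose ids match wicket:id attributes in an HTML template. Here's a bare-bones sketch (the page name and component id are invented for illustration):

```java
// Bare-bones illustration of the Wicket/Java pairing: a page class adds
// components whose ids match wicket:id attributes in an HTML template.
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;

public class HelloPage extends WebPage {

  public HelloPage() {
    // "message" must match a wicket:id in the companion HelloPage.html,
    // e.g. <span wicket:id="message">placeholder text</span>
    add(new Label("message", "Hello from Wicket"));
  }
}
```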

Creating a project as a duo had already seemed hard enough in terms of work delegation. In this assignment, two groups were combined to make a four-man team. At first we didn't know what to do; on the first workday of the assignment I managed to do the initial setup with GoogleCode, GoogleGroups, Hudson, and Hackystat. But after that it was a toss-up. Luckily, one of my group members, Kelvin Green, stepped up and took charge. He pretty much set the groundwork for the whole project. Communication thrived within the group: we talked through AIM, exchanged e-mails regularly, and asked questions when we needed clarification.

The overall design of the system, I think, turned out pretty organized. We used the Google Chart system to create a nice breakdown of the carbon intensity throughout a specific day. The form fields on the page were broken up into three sections to avoid most formatting errors, and any invalid input can be easily isolated. One thing that bothered me was that we didn't output a specific numeric breakdown of the lbs of CO2 / MWh. The graph looks nice, but it only gives an estimate of the current carbon intensity. Then again, would the average consumer understand the specifics, or just use the green, yellow, and red colors to decide when to run certain appliances?

For this project we also used the Hackystat Software ICU. Work had already commenced before the sensors were properly installed, so some data may not depict an accurate progression of the project.

Here's a screen capture of our project's Vital Signs covering the past week, 11/16 to 11/23:


Vital Sign Analysis:
Coverage: The coverage fluctuated throughout the week, but it pretty much stayed at 85%. Aside from a few test cases which ran all the methods, coverage over time was pretty flat.
Complexity: We can see immediately that there is a rising "red" trend, probably due to the amount of code that's compacted into each method. Though the trend doesn't look good, the immediate value is green, showing that the system as it stands is not that complex.
Coupling: The same argument as for Complexity applies here. Since most of the methods are found in the application Java file, there's increased coupling because that one application file uses all the other Java files.
Churn: This is one of the sensors that didn't get installed correctly and was fixed sometime after development had commenced. Although the churn is currently red, the trend seems to be decreasing.
DevTime: There's a high spike in development time, as this was another sensor that didn't install correctly. The high spike can be best explained by the rush to get everything done by the due date.

Overall, the system is pretty unique on its own compared to other systems. From this experience, the most challenging factor is having someone step up and do the initial tasks to get everything set up. Once that's done, there's an even harder task of deciding who is to do what. Task delegation is the hardest part; it seems as though there's always someone who works the most and someone who does the least.

What's nice about having someone delegate tasks is that everyone knows what they're doing, but the person in charge has to go the extra mile of organizing. I have yet to be in a group where everyone is at an equal level and knows what the task at hand is, but even then, I think splitting the workload is a task in itself.

Monday, November 16, 2009

Enter WattDepotCLI Branch Umi v2.0-ish

WattDepotCLI Branch Umi v2.0 can be downloaded here.

After reading through all the reviews and comments made on Branch Umi, it was time to put those reviews to good use and polish up our system. This assignment has been like no other in my previous ICS classes, the major difference being working with a partner. I could've sworn I wished I could work with a partner back in the entry-level Java classes; it's not until now that I can see the advantages and disadvantages of doing so. Using SE tools such as Ant, SVN, and automated quality assurance tools did make the sharing process easier, but the one element that cannot be done by a computer is the cooperation and output of another person. What I mean is captured by "Two heads are better than one": throughout this WattDepot experience, sometimes this was true and other times it wasn't.

For the last stages of the version 2 implementation, we were introduced to the Hackystat Project, which is sort of a Software ICU (Intensive Care Unit) that monitors our WattDepot project's "vital signs", such as commits, lines of code, complexity, coupling, etc. Hudson just barely scratches the surface of this idea, and Hackystat takes project monitoring to a whole new level.

The initial implementation of version 1 of Branch Umi satisfied the functionality requirements set for v1.0, but a lot of the reviews said we now need to focus on creating a high-quality design complete with test cases and separate packages for the classes. The system does not implement every command set by the version 2.0 specification; the command that was not implemented was the last one, 2.13 carboncontent. My partner and I decided to forgo 2.13 and focus on quality with what we have. The test cases that we do have only check that each command outputs known values; we did not test for invalid inputs. The quality of our system could have been better through test cases, but we don't have much to show for it. I tried to delegate this task to my partner, but I ended up doing some of the basic tests. With the test cases that we do have, Emma reports:
  • class: 86% (18/21)
  • method: 84% (38/45)
  • block: 55% (2185/3980)
  • line: 61% (464.3/755)
On the surface the coverage looks pretty good. Most of the methods and classes are exercised by the tests, but once you look at the block and line portions of the report, we can see that only roughly half of all the code we implemented is being tested. So even though the first two numbers seem okay, the last two show the "real" extent of the tests.
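
For the curious, here's a sketch of the style of test we wrote: assert that a command returns a known value for a known source and timestamp. The FakeCommand class is a hypothetical stand-in for one of our real command classes, and note that invalid input is never exercised:

```java
// Sketch (not our actual test code) of a known-value test: run a command
// for a known source and timestamp and compare the output against a known
// value. FakeCommand stands in for a real WattDepotCLI command class.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class KnownValueTest {

  /** Hypothetical stand-in for a real command class. */
  static class FakeCommand {
    String execute(String source, String timestamp) {
      return "9.95E2"; // a real command would query WattDepot here
    }
  }

  @Test
  public void testKnownValue() {
    FakeCommand command = new FakeCommand();
    // Only the "happy path" is checked; malformed timestamps never are.
    assertEquals("9.95E2",
        command.execute("SIM_OAHU_GRID", "2009-11-26T20:00:00.000-10:00"));
  }
}
```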

As far as group process goes, I am a little disappointed we didn't meet more often. There was also the problem of what to actually do when we did meet. Obviously the task at hand was to complete the project, but how do we go about splitting it up? My partner seemed like he just wanted to be told what to do rather than take initiative, which is fine by me except when the person doesn't follow through. I feel as though if I had taken more initiative and really told him what to do, I'd have felt bossy, but I guess with what's at stake it can't be helped. But I digress.

The Hackystat sensors we installed in our project were something that I thought was interesting and cool. Seeing the colored bars tracking the progress of our project really adds perspective on the health of the project. We can also see where our project is heading, and whether the changes we make are for the better or actually worsen its condition. It sure beats looking at the Sunny/Cloudy/Thunderstorm icon in Hudson, and provides more meaningful information, both current and in trend.
Here's a screen shot of that latest Hackystat Analysis for Branch Umi:

(You'll have to click it to get a larger zoom)

We can see that there's a mixture of red and green. The coverage from our test cases has been steadily increasing, but the number 62.0 is yellow, meaning the coverage is about average but not that great. Complexity is surprisingly high, but this is probably due to the large number of if-statements used to check the form of each command. Coupling looks great; this is due to the splitting of each command into its own Java class. It took a while to get everything configured properly for Hackystat to receive sensor data from both our command lines and Eclipse, so Hackystat could have missed about a day's worth of data. So while the last parts, DevTime, Commit, Build, and Test, show some activity, it's hard to get a clear sense of where they're headed.

For the last part of the assignment, we were to answer a few questions to test the functionality of our WattDepot implementation. Unfortunately we did not get to finish implementing the last command, carboncontent, so we are unable to answer the last two questions.

What day and time during the month was Oahu energy usage at its highest? How many MW was this?
Command: power generated SIM_OAHU_GRID timestamp 2009-11-26T20:00:00.00-10:00
9.95E2

What day and time during the month was Oahu energy usage at its lowest? How many MW was this?
Command: power generated SIM_OAHU_GRID timestamp 2009-11-28T02:45:00.000-10:00
4.96E2

What day during the month did Oahu consume the most energy? How many MWh was this?
Command: powerstats generated SIM_OAHU_GRID day 2009-11-26 sampling-interval 60 statistic max
9.95E2

What day during the month did Oahu consume the least energy? How many MWh was this?
Command: powerstats generated SIM_OAHU_GRID day 2009-11-26 sampling-interval 60 statistic min
4.93E2

What day during the month did Oahu emit the most carbon (i.e. the "dirtiest" day)? How many lbs of carbon were emitted?

What day during the month did Oahu emit the least carbon (i.e. the "cleanest" day)? How many lbs of carbon were emitted?

In order to get the correct dates to enter, I actually had to pull up excess data and stare at a bunch of output until I saw the appropriate number. From there, I simply recorded the timestamp and entered the corresponding command to elicit the correct output.

I also asked another developer, Kendyll Doi, about how to approach this problem. He had an interesting idea of using the chart command to get a general idea of the peaks and valleys during each day, then using the powerstats command with different timestamps to verify the minimum and maximum energy.

Wednesday, November 11, 2009

Outside Insight on Wattdepot-Cli Branch Umi

In my previous post I had completed reviewing two other branches of the Wattdepot-cli; projects Eha and Elima. Yesterday, my partner and I sat down and reviewed the comments that were left by those who had our branch as their assignment.

The reviewers for our branch were:

For the most part, my partner and I generally knew where our system lacked. Most of the comments centered around test cases (we had none), package documentation, and the overall design of the system. There were some specific errors that were mentioned, but those could be fixed without any major repercussions.

What we need to do is:
  • CREATE TEST CASES!
  • Separate each method into its own class
  • Have separate packages for processor and command
  • Add more descriptive JavaDocs and explain how the packages interact with each other
  • Re-code some lines because the Wattdepot library was recently updated

Reviewing other branches made me realize there were more efficient ways of doing things, such as creating a parent list of all sources and, when writing out to a file, displaying the absolute file path so the user knows exactly where to find the output file. I will end up adapting code from other branches to make our source code more elegant and straightforward. The checklist that Prof. Johnson provided made the review experience straightforward and pleasant. Now that I have a basis for reviewing code, I know what to look for when reviewing future projects, both mine and others'.
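
For example, here's a small sketch (file name made up) of the absolute-path idea: print File.getAbsolutePath() after writing so the user knows exactly where the output went.

```java
// Small sketch of the "tell the user exactly where the file went" idea.
// The file name and contents are just examples.
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class OutputPathSketch {

  public static void main(String[] args) throws IOException {
    File output = new File("chart-output.html");
    FileWriter writer = new FileWriter(output);
    try {
      writer.write("<html><body>chart goes here</body></html>");
    }
    finally {
      writer.close();
    }
    // The absolute path tells the user exactly where to look.
    System.out.println("Output written to: " + output.getAbsolutePath());
  }
}
```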

Sunday, November 8, 2009

Review of Wattdepot-CLI Branches: Eha and Elima

Following the same cycle as we did for Robocode, after the first deployment of each branch of Wattdepot-CLI, it came time for the review. Our assignment this week for my Software Engineering class was to review two other branches of Wattdepot-CLI. My two assigned branches were Eha and Elima.

Eha's distribution version 1.0 can be downloaded here.
Elima's distribution version 1.0 can be downloaded here.

For this assignment, Prof. Johnson gave us a checklist that we followed to review each branch. The basic rundown of the review was to make sure each branch builds successfully and passes the automated quality assurance tools (PMD, Checkstyle, FindBugs, JUnit), and then to check the functionality, source code, and overall design of the system.

Instead of posting my two reviews directly in the post, I've uploaded PDF versions which can be downloaded at the following:
I'll also summarize my thoughts on each of the branches:

Branch Eha
Eha was the first branch I reviewed. I was able to build and run all the commands; only "list sources" didn't work. The one thing that I disliked was the lack and random placement of error messages. Generally, if there was something wrong with the command entered, i.e. missing parameters or a misspelled keyword (generated|consumed), the system would seemingly dismiss the input and bring up a blank prompt. It was good that the system could tell there was something wrong, but it would be better if the system could respond with what was wrong with the entered command. Other than that, they just need to split the rest of the code into their respective classes, divide them into processor and command packages, and create more in-depth test cases.

Branch Elima
This branch was sort of the opposite of Eha. Although they had the correct design structure (i.e. separate classes and separate packages), which I failed to do in my branch, not all the commands worked. I had to look at the source code before I was supposed to in the checklist. What I discovered was that although they had separate classes for each command, and each contained source code and wasn't empty, not all the commands were mapped onto the HashMap. This made it slightly irritating and cumbersome to test whether the system implemented a given command. I tried looking at the blogs of both developers on the branch to gain insight as to which commands weren't implemented, but neither of them mentioned that they didn't map the commands onto the HashMap. I spent a good 30 minutes trying to figure out which commands the system did accept. Overall, a well-organized system that needs to be filled in with complete commands and test cases.
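
For reference, here's a sketch of the command-to-HashMap mapping that Elima's design implies; the names are illustrative, not their actual identifiers. The point is that a command class that never gets put into the map is invisible to the processor, which is exactly what made testing their commands cumbersome:

```java
// Sketch of mapping command names to command objects behind a common
// interface. Names are illustrative, not Elima's actual identifiers.
import java.util.HashMap;
import java.util.Map;

public class CommandMapSketch {

  interface Command {
    void run(String[] args);
  }

  public static void main(String[] args) {
    Map<String, Command> commands = new HashMap<String, Command>();
    commands.put("help", new Command() {
      public void run(String[] cmdArgs) {
        System.out.println("Available commands: help, quit");
      }
    });
    // A "quit" class could exist in the source tree, but if it is never
    // put into this map, typing "quit" falls through to the error branch.

    Command command = commands.get("help");
    if (command == null) {
      System.out.println("Unknown command.");
    }
    else {
      command.run(new String[0]);
    }
  }
}
```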