Wednesday, December 9, 2009

Greenometer v2.0: Saving Green$ While Going Green

The final project for my Software Engineering class was to polish my group's first iteration of the Wicket application Greenometer. Continuing from my previous post about creating Greenometer, this blog post contains my final thoughts on the whole process. The final specifications for version 2.0 were released here.

Greenometer v2.0 can be downloaded here.

Prof. Johnson had originally planned for us to make a v1.1 release (specifications found here), but he decided to forgo the v1.1 due date and have v2.0 due instead, the reasoning being that v2.0 should include all revisions and changes that would have been made in v1.1.

So, for the last month of the semester I have been constantly working around the clock with my group members to finish the specifications for v2.0. I have gotten more acquainted with using Wicket in combination with Java and took on various tasks ranging from the actual UI of our interface, to the source code, to testing.

My journey into first semester Software Engineering - ICS 413 certainly gave me a little taste of every aspect of being a software engineer. I learned the group collaboration process, which entails splitting up group work, getting group members motivated to participate, and making sure the project stays on schedule, all while having an overall great time. The coding aspects revolved around learning how to create test cases and learning the difference between high coverage and quality coverage. There is definitely a difference between creating a functional interface versus a quality interface.

Prof. Johnson stressed that although a quality end product is the ultimate goal, a partially complete system that's high quality is better than a system that's mostly functional but only partially tested.

Our final project features 5 pages:
  • Home
  • Stop Light
  • Grid
  • Concepts
  • Contact Us
As noted in my previous post regarding this project, the functionality of this application is to provide users with detailed information about the Oahu power grid, in an attempt to draw users' attention to going green and saving energy. Our simulated application allows its users to view the energy generated at specific sources given a date or range of dates.

Project Details

Home
The home page briefly describes the application's purpose and its functionality, mainly outlining the other 4 pages: Stop Light, Grid, Concepts, and Contact Us.

Stop Light
This page features a stop light that shows either red, yellow, or green depending on the Carbon Intensity (lbs CO2 / MWh) at the current time. This page queries WattDepot, gathers carbon intensities for the whole day, then determines the color value for the current time.
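The color lookup can be sketched roughly like this; the thresholds below are made-up placeholders for illustration, not the values our application actually used:

```java
/** Maps a carbon intensity reading (lbs CO2 / MWh) to a stop light color.
 *  The threshold values are hypothetical placeholders, not the app's real ones. */
public class StopLight {
    public enum Color { GREEN, YELLOW, RED }

    public static Color colorFor(double lbsCO2PerMWh) {
        if (lbsCO2PerMWh < 1500) {
            return Color.GREEN;   // relatively clean: a good time to use power
        } else if (lbsCO2PerMWh < 2000) {
            return Color.YELLOW;  // average intensity
        } else {
            return Color.RED;     // dirty: hold off on heavy usage
        }
    }
}
```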

Grid
The grid features a Google Chart, either a Bar or a Line chart as chosen by the user, based on the Energy Emitted (MWh). The user can also specify which source he/she wants to view. It currently defaults to the overall Oahu power grid, but can be changed via a drop-down menu.
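Building the chart amounts to assembling an image URL; a minimal sketch using the classic Google Image Charts parameters (cht for chart type, chd for data, chs for size) might look like this. The helper name and layout are my own, not our actual Wicket code:

```java
/** Sketch: builds a Google Image Charts URL for a series of energy readings.
 *  cht=lc requests a line chart, cht=bvs a vertical bar chart. */
public class EnergyChart {
    public static String chartUrl(String type, double[] mwh) {
        StringBuilder data = new StringBuilder();
        for (int i = 0; i < mwh.length; i++) {
            if (i > 0) data.append(',');
            data.append(mwh[i]);
        }
        String cht = type.equals("bar") ? "bvs" : "lc";
        return "http://chart.apis.google.com/chart?cht=" + cht
             + "&chs=400x200&chd=t:" + data;
    }
}
```

In the real page, the `type` value would come from the user's drop-down selection and the readings from WattDepot.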

Concepts
This page outlines the motivation for the application and gives an in-depth explanation of how to understand and use the Grid and Stop Light pages.

Contact Us
Provides the project's Google Site, Discussion Group, and a link to each developer's Professional Portfolio.

From Robots to Energy Grids: Software Engineering Reflection

Prof. Johnson's ICS Software Engineering class was a unique experience this semester. It was the first class that actually made me feel like a Computer Scientist. Working towards a real-world problem, collaborating with others, and learning new tools all made it surreal and thrilling. Granted, this course takes a lot out of you; at times I felt like the assignments weren't given enough time and that I had to focus most of my energy on this course. This course felt like a job, and my pay was my final grade.

What I found most valuable were the new software engineering tools such as Ant, Subversion, and the quality assurance tools: JUnit, Checkstyle, PMD, and FindBugs. For the assignments that we did, from Robocode, to WattDepot, to Wicket, it was hard to find a balance between quality and functionality. Prof. Johnson constantly encouraged us to create test cases as we developed our projects, but I found it difficult to come up with which aspects to test. Testing also takes a considerable amount of time with every run of Ant: JUnit would take an excessive amount of time to complete, and it seemed to hinder the overall time spent waiting for the project to pass verify.

The group aspect gave me another take on what it is to collaborate with others, especially with those in my field. It seemed that the group went only as far as the strongest person. Sure, everyone has their forte, but doing everything shouldn't be one of them. It was hard to find an even split of work for everyone to do. There was always some aspect that bled into another, and it was hard to work on a single method or class without waiting for someone to commit their changes.

We were introduced to an array of technologies and experienced many different aspects of being a software engineer. We were fortunate enough to have coded in Java throughout the semester, a language that has been drilled into my brain ever since my beginning semesters as an ICS student. Though, the one thing that I would like to learn more about is the framework for planning. Splitting the workload is always a big issue. How do you come across as knowledgeable rather than bossy? Or try to get someone to carry their own weight in a tactful way that doesn't sound too threatening? Indeed, it might have turned out differently if we had everyone in our group share what our strengths and weaknesses were. However, group work isn't something that is deeply encouraged in the lower levels of ICS, so it takes some getting used to the concept of collaboration and the sharing of ideas.

Learning a new system such as WattDepotClient, and learning a new framework such as Wicket, are the types of things expected of every Computer Scientist in order to keep up in this field.

Tuesday, November 24, 2009

Gander at Greensmart: The Other Wicket Web App

For this week in my Software Engineering class, after the first implementation of our Wicket applications, I was assigned to evaluate Greensmart's Wicket application.

The details for the assignment were much like reviewing the WattDepotCLI's in my previous post, but a few modifications were made to the review criteria. The review criteria for this assignment can be found here.

Click here to go to Greensmart's GoogleCode Page
My full review of the system in PDF format can be downloaded here

My overall thoughts:
Overall I think the project fulfills the assignment at the most basic of levels. It does take in a date and output the Carbon Intensity throughout the day. But without ever reading the homepage of their GoogleCode site, it's hard to make out the purpose of the application. These are all just minor things that can be fixed given some more time.

The source code itself could use more descriptive JavaDocs and more in-line comments explaining implementation details. A few tweaks toward a more descriptive web application are also in order. They might also want to work on putting colors in the cells rather than having text. One last thing about the interface is that it should provide an overall conclusion based on the data being displayed (i.e. carbon usage is currently high, hold off on extensive wattage for X hours... just something to think about).

The group appears to be working well together; judging from the Hackystat sensor logs, I could see certain people doing the beginning tasks, then towards the end the other members took over.

Monday, November 23, 2009

Tag Teaming Wicket and WattDepot: Front End Web Apps

For this week in my Software Engineering class, we were to combine the groups we had for the WattDepotCLI with another group and design a front-end web application for the WattDepot service that emulated the Ecotricity UK Grid Live.

Our most current distribution can be downloaded at the Greenometer GoogleCode site.

We were instructed to build the web application using Wicket, an open-source framework from Apache. It was chosen because of the time frame we had to develop our project, and because of its ease of integration with Java. It took a while to get used to Wicket's coding technique. Reading the book was semi-helpful, but most insight came from the ICS Wicket Examples. From these two resources I gained a basic understanding of how Wicket interacts with Java.

Creating a project as a duo had already been hard enough when it came to work delegation; in this assignment, two groups were combined to make a 4-person team. At first we didn't know what to do. At the first workday of the assignment I managed to do the initial setup with GoogleCode, GoogleGroups, Hudson, and Hackystat, but after that it was a toss-up. Luckily one of my group members, Kelvin Green, stepped up and took charge. He pretty much set the groundwork for the whole project. Communication thrived within the group: we talked through AIM, exchanged e-mails regularly, and asked questions when we needed clarification.

The overall design of the system, I think, turned out pretty organized. We used the GoogleChart system to create a nice breakdown of the carbon intensity throughout a specific day. The form fields on the page were broken up into 3 sections to avoid most formatting errors, and any invalid input can be easily isolated. One thing that bothered me was that we didn't output a specific numeric breakdown of the lbs of CO2 / MWh. The graph looks nice, but it only gives an estimate of the current carbon intensity. Then again, would the average consumer understand the specifics, or just the colors of green, yellow, and red for when to use certain appliances?

For this project we also used the Hackystat Software ICU. Work had already commenced before the sensors were properly installed, so some data may not depict an accurate progression of the project.

Here's a screen capture of our Vital Signs for our Project covering the past week:
11 / 16 to 11 / 23


Vital Sign Analysis:
Coverage: The coverage fluctuated throughout the week, but it pretty much stayed at 85%. Aside from a few test cases which ran all the methods, coverage over time was pretty flat.
Complexity: We can see immediately that there is a rising "red" trend, probably due to the amount of code that's packed into each method. Though the trend doesn't look good, the immediate value is green, showing that the system as a whole is not that complex.
Coupling: The same argument as for Complexity applies here. Since most of the methods are found in the application java file, there's increased coupling, because that one application java file uses all the other java files.
Churn: This is one of the sensors that didn't get installed correctly and was corrected sometime after development had commenced. Although the churn is currently red, the trend seems to be decreasing.
DevTime: There's a high spike in development time, as this was another sensor that didn't install correctly. The spike can best be explained by the rush to get everything done by the due date.

Overall, the system is pretty unique compared to other systems. From this experience, the most challenging factor is having someone step up and do the initial tasks to get everything set up. Once that's done, there's an even harder task of deciding who is to do what. Task delegation is the hardest part; it seems as though there's always someone who works the most and someone who does the least.

What's nice about having someone delegate tasks is that everyone knows what they're doing, but that person in charge has to go the extra mile of organizing. I have yet to be in a group where everyone is at an equal level and knows what the task at hand is, but even then, I think splitting the workload is a task in itself.

Monday, November 16, 2009

Enter WattDepotCLI Branch Umi v2.0-ish

WattDepotCLI Branch Umi v2.0 can be downloaded here.

After reading through all the reviews/comments made on Branch Umi, it was time to put those reviews to good use and polish up our system. This assignment has been like no other in my previous ICS classes, the major difference being working with a partner. I could've sworn I wished I could work with a partner in the lower entry Java classes; it's not until now that I can see the advantages and disadvantages of doing so. Using SE tools such as Ant, SVN, and automated quality assurance tools did make the sharing process easier, but the one element that cannot be done by a computer is the cooperation and output of another person. What I mean can be summed up as "Two heads are better than one": throughout this WattDepot experience, sometimes this was true and other times it wasn't.

For the last stages of version 2 implementation, we were introduced to the Hackystat Project, which is sort of a Software ICU (Intensive Care Unit) that monitors our WattDepot project's "vital signs", such as commits, lines of code, complexity, coupling, etc. Hudson just barely scrapes the surface of this idea; Hackystat takes project monitoring to a whole new level.

The initial implementation of version 1 of Branch Umi satisfied the functionality requirements set for v1.0, but a lot of the reviews said we now needed to focus on creating a high-quality design complete with test cases and separate packages for the classes. The system does not implement all commands set by the specifications of version 2.0; the one command not implemented was the last command, 2.13 carboncontent. My partner and I decided to forgo 2.13 and focus on quality with what we had. The test cases that we have only check that each command outputs known values; we did not test for invalid inputs. The quality of our system could have been better through test cases, but we have little to show for it. I tried to delegate this task to my partner, but I ended up doing some of the basic tests myself. With the test cases that we do have, Emma reports:
  • class: 86% (18/21)
  • method: 84% (38/45)
  • block: 55% (2185/3980)
  • line: 61% (464.3/755)
The coverage on the surface looks pretty good. Most of the methods and classes have been exercised by the tests, but once you look at the block and line portions of the report, we can see that only roughly half of all the code we implemented is being tested. So even though the first two numbers seem okay, the last two show the "real" extent of the tests.
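In spirit, our known-value tests amounted to checking a canned output, while the invalid-input path went unexercised. Here's a minimal sketch of both kinds of check; the `PowerCommand` class and its return values are stand-ins for illustration, not our actual code:

```java
/** Stand-in for one of our CLI commands, for illustration only. */
public class PowerCommand {
    /** Returns a power reading for a timestamp, or an error line. */
    public static String execute(String timestamp) {
        if (timestamp == null || timestamp.isEmpty()) {
            return "Error: missing timestamp";  // the path our tests never covered
        }
        return "9.95E2";  // canned known value, standing in for a WattDepot query
    }
}
```

A known-value test asserts that a valid timestamp produces the expected reading; an invalid-input test, the kind we skipped, asserts that an empty timestamp produces the error line instead of a crash.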

As far as group process goes, I am a little disappointed we didn't meet more often. There was also the problem of what to actually do when we did meet. Obviously the task at hand was to complete the project, but how do we go about splitting it up? My partner seemed like he'd rather be told what to do than actually take the initiative to do things. Which is fine by me, except when the person doesn't deliver... I felt as though if I took more initiative and really did tell him what to do, I'd feel bossy, but I guess with what's at stake it can't be helped. But I digress.

The Hackystat sensors we installed in our project were something I thought was interesting and cool. Seeing the colored bars on the progress of our project really adds perspective on the health of our project. We can also see where our project is heading, whether the changes we make are for the better or actually worsen its condition. It sure beats looking at a Sunny/Cloudy/Thunderstorm icon in Hudson, and provides more meaningful information both currently and in trend.
Here's a screen shot of that latest Hackystat Analysis for Branch Umi:

(You'll have to click it to get a larger zoom)

We can see that there's a mixture of red and green. The coverage from our test cases has been steadily increasing, but the number 62.0 is yellow, meaning the coverage is about average, but not that great. Complexity is surprisingly high, but this is probably due to the large number of if-statements used to check the form of each command. Coupling looks great; this is due to the splitting of each command into its own Java class. It took a while to get everything configured properly for Hackystat to receive sensor data from both our command lines and Eclipse, so Hackystat could have missed about a day's worth of data. So the last parts, DevTime, Commit, Build, and Test, show some activity, but it's hard to get a clear sense of where they're headed.

For the last part of the assignment we were to answer a few questions to test the functionality of our WattDepot implementation. Unfortunately we did not get to finish implementing the last command, carboncontent, so we were unable to do the last two questions.

What day and time during the month was Oahu energy usage at its highest? How many MW was this?
Command: power generated SIM_OAHU_GRID timestamp 2009-11-26T20:00:00.00-10:00
9.95E2

What day and time during the month was Oahu energy usage at its lowest? How many MW was this?
Command: power generated SIM_OAHU_GRID timestamp 2009-11-28T02:45:00.000-10:00
4.96E2

What day during the month did Oahu consume the most energy? How many MWh was this?
Command: powerstats generated SIM_OAHU_GRID day 2009-11-26 sampling-interval 60 statistic max
9.95E2

What day during the month did Oahu consume the least energy? How many MWh was this?
Command: powerstats generated SIM_OAHU_GRID day 2009-11-26 sampling-interval 60 statistic min
4.93E2

What day during the month did Oahu emit the most carbon (i.e. the "dirtiest" day)? How many lbs of carbon were emitted?

What day during the month did Oahu emit the least carbon (i.e. the "cleanest" day)? How many lbs of carbon were emitted?

In order to get the correct dates to enter, I actually had to pull up excess data and stare at a bunch of output until I saw the appropriate number. From there, I simply recorded the timestamp and entered the corresponding command to elicit the correct output.

I've also asked another developer, Kendyll Doi, about how to approach this problem. He had an interesting idea of using the Chart command to get a general idea of peaks and valleys during each day, then using the powerstats command trying different timestamps to verify the minimum and maximum energy.

Wednesday, November 11, 2009

Outside Insight on Wattdepot-Cli Branch Umi

In my previous post I had completed reviewing two other branches of the Wattdepot-cli; projects Eha and Elima. Yesterday, my partner and I sat down and reviewed the comments that were left by those who had our branch as their assignment.

The reviewers for our branch were:

For the most part, my partner and I generally knew where our system lacked. Most of the comments centered around test cases (as we had none), package documentation, and the overall design of the system. There were some specific errors that were mentioned, but those can be fixed without any major repercussions.

What we need to do is to:
  • CREATE TEST CASES!
  • Separate each method into its own class
  • Have separate packages for processor and command
  • Add more descriptive JavaDocs and explain how the packages interact with each other
  • Re-code some lines because the Wattdepot library was recently updated

Reviewing other branches made me realize there were more efficient ways of doing things, such as creating a parent list of all sources, and, when writing out to a file, displaying the absolute file path so the user knows exactly where to find the output file. I will end up adapting code from other branches to make our source code more elegant and straightforward. The checklist that Prof. Johnson provided made the review experience straightforward and pleasant. Now that I have a basis for reviewing code, I know what to look for when reviewing future projects, both mine and others'.

Sunday, November 8, 2009

Review of Wattdepot-CLI Branches: Eha and Elima

Following the same cycle as we did for Robocode, after the first deployment of each branch of Wattdepot-CLI, it came time for the review. Our assignment this week for my Software Engineering class was to review two other branches of Wattdepot-CLI. My two assigned branches were Eha and Elima.

Eha's distribution version 1.0 can be downloaded here.
Elima's distribution version 1.0 can be downloaded here.

For this assignment, Prof. Johnson gave us a checklist that we followed to review each branch. The basic rundown of the review was to make sure each branch builds successfully, passing the automated quality assurance tools (PMD, Checkstyle, FindBugs, JUnit), then check the functionality, source code, and overall design of the system.

Instead of posting my two reviews directly in the post, I've uploaded PDF versions which can be downloaded at the following:
I'll also summarize my thoughts on each of the branches:

Branch Eha
Eha was the first branch I reviewed. I was able to build and run all the commands; only "list sources" didn't work. The one thing that I disliked was the lack, and random placement, of error messages. Generally, if there was something wrong with the command entered, i.e. missing params or a misspelled keyword (generated|consumed), the system would seemingly dismiss the input and bring up a blank prompt. It was good that the system could tell there was something wrong, but it would be better if the system could respond with what was wrong with the entered command. Other than that, they just need to split the rest of the code into their respective classes, divide them into processor and command packages, and create more in-depth test cases.

Branch Elima
This branch was sort of the opposite of Eha. Although they had the correct design structure (i.e. separate classes, separate packages), which I failed to do in my branch, not all the commands worked. I had to look at the source code before I was supposed to in the checklist. What I discovered was that although they had separate classes for each command, and each contained source code and wasn't empty, not all the commands were mapped onto the HashMap. This made it slightly irritating/cumbersome to test whether the system implemented a given command. I tried looking at the blogs of both developers on the branch to gain insight as to which commands weren't implemented, but neither of them mentioned that they didn't map the commands onto the HashMap. I spent a good 30 minutes trying to figure out which commands the system did accept. Overall, a well-organized system that needs to be filled in with complete commands and test cases.

Wednesday, November 4, 2009

Programming Duo: Wattdepot CLI

For the past couple of weeks in my Software Engineering class I have been working on developing a branch of the Command Line Interface for the open source project Wattdepot. In essence, Wattdepot is a RESTful web service that collects electricity data (such as current power utilization or cumulative power utilization) from meters and stores it in a database.

I have been working with a partner, Bao Ung, and together we have developed our own branch of the Wattdepot CLI.

Our project branch is designated "umi" (as our group number was ten) and our distributed system can be downloaded at the Wattdepot-cli Google Project site by clicking here.

By using all of the SE tools we have been introduced to thus far (Ant, SVN, Google Code, and Hudson), we were able to effectively develop the project concurrently. The use of Ant's automated builds and quality assurance tools ensured our project would run, SVN allowed both of us to commit changes to a single repository so each of us would have the most up-to-date classes, and Hudson recorded our build progress as well as our project's overall "build" health.

For the most part our system does run functionally and is able to process each command and return output. However, even with the extension given to us by Prof. Johnson, neither of us found time to create test cases in order to thoroughly test our system. We also didn't break up the methods into their own classes, although it shouldn't be very difficult to do so, as all the methods are independent and don't output directly to the console; they all return strings. We just need to create 2 new packages, "command" and "processor." All the coding is well commented and all java files pass verify.build.xml.

There were some minor workarounds that we discovered worked better in the end. One of them was that if there were big changes done to the system (i.e. refactoring of methods to suit a more dispatch-style command mapping), it was best to just have the person with the old version delete their existing source code and "update" so they would have the most recent version. Also, we had a little trouble organizing how the command arguments should be processed; after Prof. Johnson's lecture on HashMaps we got some insight on how to proceed.
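The dispatch style of command mapping can be sketched like this: instead of a long chain of if-statements, each command verb is looked up in a HashMap. The names here are illustrative, not our actual class names:

```java
import java.util.HashMap;
import java.util.Map;

/** Dispatch-style command mapping: the first token of the input
 *  selects a Command object from a map. Names are hypothetical. */
public class CommandProcessor {
    interface Command { String execute(String[] args); }

    private final Map<String, Command> commands = new HashMap<>();

    public CommandProcessor() {
        // each CLI command registers itself under its verb
        commands.put("help", args -> "Available commands: help, quit");
        commands.put("quit", args -> "Goodbye");
    }

    public String process(String input) {
        String[] tokens = input.trim().split("\\s+");
        Command cmd = commands.get(tokens[0]);
        return (cmd == null) ? "Unknown command: " + tokens[0]
                             : cmd.execute(tokens);
    }
}
```

Adding a command then means writing one class (or lambda) and one `put` call, which is also why an unmapped command, as in the Elima branch I reviewed, silently disappears: the lookup just returns null.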

A good portion of time was spent reading the Wattdepot API, and this hindered work progress. All the commands were laid out for us (they can be found here); we just needed to know how to access the data.

In the beginning it was hard to split the work. Previous ICS classes dealt with solo projects where sharing of code was not allowed, so Bao and I were a little stumped as to who should do what. What we both realized is that it was easier to code in modules and have separate classes for each command; that way we don't run into merge conflicts from modifying the same file. In our situation, though, we saw each other daily, so merge conflicts weren't much of a problem since we worked in close proximity.

Monday, November 2, 2009

Hudson + SCM = <3

For this week in the Software Engineering class, we were introduced to another software engineering tool, Hudson, which is a continuous integration tool. Currently I am working on a branch of the CLI (command-line interface) for the Wattdepot-CLI project; there are about 13 other branches also in the making.

Continuous integration is when your software project is automatically rebuilt after someone commits to the project. Shortly after a commit has been made, your CI (Continuous Integration) tool can poll your Software Configuration Management system and see if the commit that was made breaks the system. This is extremely helpful when it comes to having the most up-to-date WORKING system.

Some of the features Hudson includes are an online console where you can view a build in real time, and a build archive so you can see where a build messed up. You can have it set up to build continuously, or simply poll the SCM to see if the version Hudson has is the most recent. In our case we have it set up to poll every 5 minutes.

When a commit does break the system, Hudson is able to send out an e-mail notification to all users on the project (we use a Googlegroup to accomplish this); this in effect creates a more stable system, since all developers will see the error and hopefully try to correct it. Hudson also records how many test cases are run and graphically shows this as a number-of-cases vs. time relation.

Setting up Hudson was relatively straightforward; the screencast that was made really broke it down into steps. All you really need to know is the URL to the trunk of your project. What's also nice about Hudson is that it checks out your project anonymously, so you don't have to give it sensitive information like a username/password to build your project.

I didn't know what to expect from Continuous Integration, other than listening to Prof. Johnson's screencast about how great it is, but after almost a week under CI, I can see that it is indeed a powerful tool for creating a more stable, growing system.

I can't really say there's a bad thing about using Hudson, other than the ton of e-mail I get for every build success and failure, along with those of all the other branches, but that's easily dealt with. For this small two-person project, I could live without it, but for larger projects that require groups of developers, I can't see myself not using CI to ensure every section of the software is up and running.

Sunday, October 18, 2009

Turning Knowledge Into Questions (Update)

When reading to understand, it takes a while before the full concept is realized. You must first do the initial reading, pausing a moment or two at difficult points, digest the material, then review. The real test of whether one fully understands the material is if he/she can create questions based on the reading that aren't easily answered just by skimming.

This is just the case for this week's assignment for my Software Engineering class. This blog post contains 10 quiz style questions that will help me and fellow classmates study for the upcoming midterm. Each student will come up with their own questions and by the end of the assignment, we will have a diverse array of questions to study from.

Topics range from the initial FizzBuzz program, multi-tasking, Java concepts, Build technology, testing, Ant, and Configuration Management.

I will amend this post with the answers later on this week.
Answers are here!

Here goes:

FizzBuzz + Anti-Patterns
1) How would you test the correctness of the FizzBuzz program?
A: You essentially want to create a unit test that tests the boundaries and the conditions that print "Fizz", "Buzz", and "FizzBuzz", namely 1, 3, 5, 15, and 100.
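Such a boundary test might look like the following; the FizzBuzz implementation is included so the sketch is self-contained:

```java
/** A self-contained FizzBuzz plus the boundary values the answer describes. */
public class FizzBuzz {
    public static String of(int n) {
        if (n % 15 == 0) return "FizzBuzz";  // multiples of both 3 and 5
        if (n % 3 == 0)  return "Fizz";
        if (n % 5 == 0)  return "Buzz";
        return Integer.toString(n);
    }
}
```

A unit test would then assert `of(1)` is "1", `of(3)` is "Fizz", `of(5)` is "Buzz", `of(15)` is "FizzBuzz", and that the upper boundary `of(100)` still behaves (100 is a multiple of 5, so "Buzz").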

2) What would be a Happy-path test for the FizzBuzz program?
A: A happy-path test would be a single unit test that always tests one well-behaved value, say 1, and expects 1 back, causing the test to always pass while exercising none of the tricky cases.

Three Prime Directives
3) How do the Ant .xml files in the robocode-pmj-dacruzer system satisfy the Three Prime Directives?
A: The build files included allow for automated quality assurance/distribution, which is a very useful tool to alleviate the tediousness of importing foreign projects. The fact that they're coded in Ant and automatically download the necessary libraries needed to successfully "build" the system ensures that an external user can install and use it. The .xml files are direct source code which users can edit, and there's also the online API documentation for Ant.

Coding Standards
4) You created the ultimate Robocode robot that can beat any opponent, why would ensuring your code be up to standards be necessary?
A: The ability to beat another robot does not ensure readability or the ability for another user to adapt your code. When you need to isolate a specific portion of code to test, your code could be a mess, and you would probably have to restructure it in order to do so.

Quality Assurance
5) Your code has passed the automated tools Checkstyle, PMD, and FindBugs; why are automated tools not enough to ensure quality code?
A: These sorts of tools only ensure that your program follows standards and runs. They in no way can test the validity/correctness of your program. You will need to create an array of tests to accomplish this.

Asking Questions, Getting Answers
6) You're having trouble getting your JUnit tests for Robocode to work in Eclipse. You keep getting the error "robocode.jar not found." Given this problem, write a question in good form; you may make up any other specific details that will aid in the diagnosis of the problem.
A: Hello, I've been trying to get my unit tests for a Robocode project to work in Eclipse, but I keep getting the error "robocode.jar not found." I've tried searching the forums to see if anyone else has had this problem, but so far no one has. I'm sure I've set up my VM arguments correctly (-Drobocode.home=) when I "Run As". I'd greatly appreciate any help/insight. Thank you for your time.

Understanding Ant Code
7) Given the following Ant code from checkstyle.build.xml:



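(The original listing is missing here; the following is a plausible reconstruction, based on the answer below, of an install-if-needed target using Ant's `get` task. The property and path names are guesses.)

```xml
<!-- Reconstructed example: download Checkstyle only if it isn't already installed. -->
<target name="install-checkstyle" unless="checkstyle.config.available">
  <get src="${checkstyle.url}/checkstyle-all.jar"
       dest="${checkstyle.dir}/checkstyle-all.jar"
       usetimestamp="true"/>
</target>
```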
What does it do?
A: It creates a target that checks whether Checkstyle has already been installed. If the property "checkstyle.config.available" is not set, Ant downloads the necessary files from the "src" URL into the "dest" folder, giving them a timestamp.
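The Ant snippet itself did not survive in this post, but a hedged reconstruction consistent with the description above might look like the following (the target name, property name, and URL/path properties are my assumptions, not the original build file):

```xml
<!-- Hedged reconstruction; names are assumptions, not from the original checkstyle.build.xml. -->
<target name="install.checkstyle" unless="checkstyle.config.available"
        description="Downloads Checkstyle if it is not already available.">
  <!-- usetimestamp keeps the remote file's timestamp and skips the
       download if the local copy is already up to date. -->
  <get src="${checkstyle.url}" dest="${lib.dir}/checkstyle.zip" usetimestamp="true"/>
</target>
```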

8) Write a simple Ant statement that checks if the file robocode.jar is available in the path ${robocode.path} and gives it the property robocode.available.
A:
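The original answer's snippet did not survive in this post; a hedged sketch using Ant's <available> task would be:

```xml
<!-- Sets robocode.available to "true" only if the jar exists at that path. -->
<available file="${robocode.path}/robocode.jar" property="robocode.available"/>
```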


Version Control
9) Give an example where pessimistic locking is preferred, then give another example where optimistic locking is best. What does this tell you about which Version Control System to use?
A: Pessimistic locking allows only a single person to check out a file, placing a lock on it until it has been checked back in. Optimistic locking allows multiple simultaneous checkouts and resolves any conflicts when a person checks their updates back in. For a web-based project or a large development team, you would want simultaneous checkouts to facilitate work, so an optimistic system like Subversion might be the right tool. However, if strict control over updates is needed and you're on a small team of maybe two or three, pessimistic locking might be feasible, so a tool like the Revision Control System (RCS) would work.

Open Source
10) What's the difference between Free Software and Open Source Software?
A: Free software doesn't need to make its source code available, and modifying and distributing the program is not necessarily allowed. Open source software makes its source code available, and modifying and distributing the software is allowed.

How did y'all do? ^o^

Tuesday, October 13, 2009

Creating a Google Home: StrafeNShoot Repository

When working on a software development team where a group of people share the same project, version control becomes a key element in maintaining the most current, up-to-date distribution of the software. Constant updates have to be taken into account so that a developer does not end up making a change that has already occurred.

For this week in my Software Engineering class, we learned about Configuration Management (CM) and Version Control Systems (VCS). We were to use a specific type of VCS, namely Subversion (SVN). Since the SVN client depends on your operating system, I used TortoiseSVN on Windows XP. In combination with the previous assignments on Ant and builds, which made distributions easy to package, distribute, and unpack, SVN gives my Robocode project StrafeNShoot an online repository; any modifications made to StrafeNShoot are committed to the online repository through SVN updates and commits.
The online repository we have used is Google Code, Google's developer network of open source projects.

My personal Google Code repository can be found here.

It currently stores the most up-to-date version of StrafeNShoot, summarizes the robot's strategy, and contains two Wiki pages: the User Guide and the Developer Guide. You can either download the .zip distribution or check out the project using SVN.

Note: Anyone can Checkout the project Anonymously, but if you would like to be a Project Member and/or commit or receive update notifications, I will have to add your Gmail account to my project. If you would like to be added, feel free to e-mail the discussion group at robocode-etm-strafenshoot-discuss@googlegroups.com

Along with Google Code, I have created a Google Discussion Group for my Google Code Project. All updates to StrafeNShoot are automatically posted in the discussion for everyone to see (including non-developers). This allows developers and others to see updates that have been going on and where the project is headed.

This weeks assignment details can be found here.

I've completed all the tasks except for the known issue of adding codesite-noreply@googlecode.com to the discussion group. However, I did work around this: I instead added my Google Code project address, robocode-etm-strafenshoot@googlecode.com, to my discussion group, then added the group's discussion e-mail (robocode-etm-strafenshoot-discuss@googlegroups.com) to the Activity Notifications in the Administer Source page of the Google project. This allows the updates to be e-mailed to all the discussion group members.

The most difficult task I found was using SVN. After installing it, you don't actually open SVN as a program; you use the right-click context menu in the repository folder. The order of steps was unintuitive at first: after modifying any of the files in my project, I had to first "update" the repository, then "commit" my changes. Remembering/re-pasting my Google Code project details is a slight pain, but manageable.

I did accidentally activate the "Adult Content Warning" when creating my Google Group. I inadvertently checked it off, thinking it said "My group does NOT contain any Adult Content." I looked through Google Groups' help site, but found that only Google administrators are able to change my content policy. I've posted my request on the site, hoping they'll change it soon. I learned my lesson: read carefully before checking off options that involve "Adult" and "content."

Google Groups help was also useful in looking for known issues such as codesite-noreply, and already existing cases of accidental “Adult Content” activation.

Wednesday, October 7, 2009

Verifying StrafeNShoot's Strategy: Failure

The topic for this week in my Software Engineering class was building test cases with JUnit and verifying certain aspects of my robot, StrafeNShoot. The aspects in question were its movement, firing, and targeting strategies.

Here's a quick recap of StrafeNShoot's strategy:

Movement: Moves in a four corners style, starting from the upper left, lower right, upper right, then lower left. I chose this movement pattern because although it doesn't seem random, it provides a good path for my robot to always be in motion.

Targeting: The gun is initially positioned at the rear flank of the robot as it moves. When StrafeNShoot sees another robot, it attempts to keep an active lock on the target while still maintaining its movement pattern.

Firing: As long as there's an enemy in its sights, fire at will.

The assignment details can be found here. We were to create 6 test cases: at most 2 could be acceptance tests, which merely check whether StrafeNShoot can consistently beat another robot; the rest would be behavioral tests, which verify the movement, targeting, and firing strategies, and unit tests, which verify that the output of individual methods is correct.

Sadly, I was only able to create 2 of the 6 required for the assignment, both of which were acceptance tests.

The easiest of the tests were obviously the acceptance tests, mainly because Prof. Johnson had already included such a test in his pmj-dacruzer file which was easily adaptable, but also there was no real calculation involved, nor did I have to modify the source code of StrafeNShoot.

When it came time to implement a test for behaviors, I began thinking about how I could test whether StrafeNShoot does in fact move to the four corners. I then realized there was no concrete test I could write, since StrafeNShoot only "attempts" to move in the four-corners style. It's possible that throughout an entire battle StrafeNShoot will never reach a corner: the way I coded the movement, if StrafeNShoot runs into another robot, it immediately heads for the next corner without reaching the one it was aiming for.

As for unit tests, it was hard to think of a solid one to use. The main problem was that I could not find a way to directly access StrafeNShoot's internal variables. The trigonometry I used to calculate the turn toward a corner, which would have been an ideal choice to verify, was hard-coded into StrafeNShoot's source code, and the only way for me to verify the angle was by repeating the exact same steps I used to calculate it in the first place. There were other issues as well, such as verifying that StrafeNShoot correctly calculates the angle to turn toward an enemy.

With my test cases seriously lacking, I can firmly say that they do not adequately ensure the quality of my robot. Sure, it has the capacity to beat other robots, but being able to verify its strategy without actually watching the battlefield is a whole other problem.

For my EMMA coverage, there isn't much to tell, overall my block coverage was 43% and line coverage was 55%. But I've uploaded the whole results, which can be viewed here.

When I first started working with Robocode, I had no idea the analysis would get this deep; I thought the only things that mattered were coming up with a unique strategy and beating other robots. But even with a unique strategy, it can be a pain trying to verify what it does. When I implemented the strategy for StrafeNShoot, I did not code in modules or even think about refactoring certain pieces of code. It ran well enough that I never looked twice. But this week showed how much modularity, organization, and coding with verification in mind can lead to well-documented, quality source code.

My distributed StrafeNShoot package, with its two lone test cases, can be downloaded here.

Wednesday, September 30, 2009

"Building" Quality Code: An Ant Experience

After working with Robocode for almost a month now, you begin to see the tediousness of some of the tasks: waiting for a battle to end, making sure your code is up to standards, and importing another person's Robocode robot. The topic for this week in my Software Engineering class is Quality Assurance and Build Technology. This past week I became familiar with Apache's Ant build technology, in combination with Apache's Ivy, a library-level dependency management tool, to eliminate those tedious tasks.

Together with Ivy's ability to download and store the necessary libraries, and the automated build technology that is Ant, I can battle, standardize my code, and package my robot for distribution, all without ever opening Eclipse or Robocode.

Although Ant is Java-based, the build files are written completely in the XML format which means 1) No Compilers and 2) Each file can be read as source code. By using Prof. Johnson's Da-Cruzer build, I adapted it to work with my robot, StrafeNShoot.

Using the PMD, Checkstyle, and FindBugs tools, I can eliminate the long hours of staring at code looking for any rules that might have been broken. I can also eliminate waiting for a battle to end by creating a JUnit test that asserts my robot beats another; in my case, I adapted Prof. Johnson's JUnit test to assert that my robot will always beat SittingDuck.

Other targets, such as "jar" and "dist", make it easy to distribute my robot so others can easily extract and battle/examine it, compared to the long and painful process of importing a new project into Eclipse and making sure all the paths are set before you can Roborumble.

Once you get Ant and Ivy set up, this is indeed an incredible tool for automating tasks. However, the initial setup was not so pretty. The errors I encountered were mostly from adapting Prof. Johnson's DaCruzer build: I had not replaced all instances of his initials and robot name with my own. Ant was kind enough to tell me exactly what the problems were, and I quickly made the corrections.

I did run into some environment variable problems. I kept getting an "Unable to find tools.jar" error message when trying to build, and Ant's feedback said it was looking for tools.jar in "C:\Program Files\Java\jre6". After searching the web for solutions, I came across the answer in this thread: http://forums.sun.com/thread.jspa?threadID=757039 Essentially, I needed to declare my JAVA_HOME variable to point to the JDK folder itself, NOT its bin subfolder. I also remembered that Ant does not work well with spaces in paths, so I relocated my Java folder directly into C:\. My JAVA_HOME was ultimately defined as "C:\Java\jdk1.6.0_16".
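For reference, the fix described above amounts to something like the following on Windows XP (a sketch using the path from this post; adjust for your own install):

```bat
REM Point JAVA_HOME at the JDK folder itself, NOT its bin subfolder,
REM and keep it out of paths that contain spaces.
set JAVA_HOME=C:\Java\jdk1.6.0_16
set PATH=%PATH%;%JAVA_HOME%\bin
```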

When I finally got it to run, I ran all the tests as described by the assignment and ended up with just 1 error:
  • Checkstyle - 1 error
  • PMD - 0 errors
  • FindBugs - 0 errors
I was amazed to read that my error was:

[checkstyle] C:\robocode-etm-strafenshoot\src\etm\StrafeNShoot.java:78: First sentence should end with a period.

My first sentence of the Javadoc was missing a period. Something so simple, yet hard to spot, and the Checkstyle tool found it in 3 seconds. Because I don't have to use a resource-heavy IDE like Eclipse to run these tools, and can do so easily from the command prompt, I find them extremely helpful and efficient. Not only can you check the quality of your work, you can also fix your code based on the informative feedback. Although it takes effort to set up, once everything is in place all you have to do is invoke a simple command and Ant will do the rest. =)

My automatically packaged StrafeNShoot, version 1.1, can be downloaded here.

Monday, September 21, 2009

Enter StrafeNShoot - My First Competitive Robocode Robot

After two weeks of introduction to Robocode, it was time to create a competitive robot. Even after looking at sample robots and collaborating with fellow classmates, it was hard to find a place to start. What type of movement, targeting, or firing strategies would be effective against other robots? For starters, it began with developing a single robot that could counter as many as possible of the following eight sample robots: Walls, RamFire, SpinBot, Crazy, Fire, Corners, Tracker, and SittingDuck.


After much toil, I implemented this strategy:

StrafeNShoot
(Source code)

Movement: Essentially uses the 4 corners movement as a method of getting around the map. Starts to go to the upper left, and then lower right, upper right, lower left, and back to upper left; going in a perpetual cycle throughout the match.
Targeting: As it moves to each corner, the robot tries to keep the gun pointed either at the center of the battlefield, or at the enemy.
Firing: As soon as it sees another robot, it will reposition the gun and fire with a power of 2. I used power of 2 because throughout trial runs, the points gained outweighed the energy used. However, when it gets shot at or collides with another robot, it will immediately realign the gun and shoot with a power of 3. Since the robot is moving most of the time, it was difficult for other robots to hit, so it was feasible to use maximum power.


Trial Runs:

Consistently Defeated: RamFire, Crazy, Fire, Corners, Tracker, SittingDuck

I noticed that my strategy gave me an advantage over stationary/tracking-type robots. Because my robot moves large distances from one side of the battlefield to the other, stationary robots keep firing and missing, expending their energy and ultimately disabling themselves.

Tracking type robots have a hard time repositioning themselves. By the time they reposition and start moving to my location, I’ve already moved onto a different heading and the tracker is just trying to keep up. Meanwhile I’m firing as I’m moving to each corner.

Although Crazy is neither a tracker nor a stationary robot, its random movements work to its disadvantage, often times it runs into my bullets, and since it only shoots when it sees a target without any type of tracking, my robot will have moved on by the time it fires.

The Troublesome Two: Walls and SpinBot

Examining all the sample robots, it was hard to come up with a solid movement strategy against these two. Robots such as Crazy, SpinBot, and Walls make it hard to create a standard pattern of attack; even tracking their movements is difficult. Against these two robots, my win/loss ratio is split about 50/50.

For Walls, the initial placement of the robots seems to play a significant role. Sometimes my movements are exactly in line with Walls, so every time Walls fires, my robot gets hit; the converse also happens, where Walls' movement is in line with mine, so every time I fire, Walls gets hit.

For SpinBot, because it keeps moving in a circle, it can seemingly dodge all of my shots. Sometimes I'll run out of energy just trying to shoot at it; on rare occasions I'll collide with it and fire at maximum power to take the win. At other times SpinBot will use up all of its energy trying to shoot, and I'll take the win when it becomes disabled.


Here are my final results, 100 matches for each of the 8 robots:

Walls: score percent: 53; 47 wins; 53 losses
RamFire: score percent: 57; 77 wins; 23 losses
SpinBot: score percent: 53; 53 wins; 47 losses
Crazy: score percent: 68; 65 wins; 35 losses
Fire: score percent: 84; 97 wins; 3 losses
Corners: score percent: 83; 100 wins; 0 losses
Tracker: score percent: 82; 98 wins; 2 losses
SittingDuck: score percent: 100; 100 wins; 0 losses


Lessons Learned

During the creation process, I came across many contradictions: what worked to defeat one robot did not work against another. Against predictable movements, it seemed that a blunt strategy like RamFire's would be a sure win. But against robots with completely random movements, the only way to compete was to move randomly as well.

Creating a single robot that can defeat all eight of the sample robots is no easy feat, and doing so should translate into success in real competitive battles. However, the only way to find out is to put it into actual battle. Talking with a few classmates, I found that I could easily defeat a specific sample robot while they could not, and vice versa.

When I was designing my robot, I was only thinking of survivability; I thought that if I outlasted my opponent I would win. However, survival is only one part of your total score. Even if a robot is defeated, it can still outrank its opponent, so a blunt robot like RamFire can still outrank you because it gains a lot of ram points. I will keep these things in mind when I make upgrades to StrafeNShoot.

Tuesday, September 15, 2009

Trial Version Robots: Learning Simple Behaviors

In order to create a competitive robot jutsu, I need to come up with a baseline for my robot design. The robots I previously coded were an introduction to Robocode, meant to build familiarity with its basic mechanics. For a robot to be competitive, it should have strategy and countermeasures. Although I need to look at this more in depth as I progress with Robocode, examining the pre-packaged sample robots will hopefully shed some light on creating a competitive robot.

In this blog entry I will examine and offer my thoughts on 8 of the sample robots: Walls, RamFire, SpinBot, Crazy, Fire, Sitting Duck, Corners, and Tracker. For each robot I will evaluate criteria based on:
  1. Movement: How does the robot move? Does it have an avoidance or following strategy?
  2. Targeting: How does it find a target to fire?
  3. Firing: What is its criteria for firing?
Robot #1: Walls
Movement
Operates a very simple movement strategy: goes to the nearest wall and just traces it. There is no follow strategy, as this robot simply traces the outline of the stage. The only avoidance strategy it exhibits is when it runs into another robot: it reverses direction 100 pixels, then changes back to its original heading.

Targeting
Keeps gun perpendicular to the wall at all times. Only sees a target if one passes in the direction of its radar.

Firing
Fires with a power of 2 at any robot it sees.

Robot #2: RamFire
Movement
Probably the most blunt robot in movement. Rushes towards any robot it sees to point blank range in an attempt to ram it.

Targeting
This robot just rotates its whole body until it finds a target; it does not go for any specific target or hold any targets in memory. Once it loses track of a target, it has to swing its whole body around to find another. This is very inefficient: because the robot scans by moving its entire body, rotating takes far longer than it would by just using the radar.

Firing
After it rushes to point-blank range, it fires the gun once it rams into another robot. The power of the shot is proportional to the enemy's energy level; it essentially tries to out-damage the target by ramming and shooting at the same time.

Robot #3: SpinBot
Movement
Spins in a clock-wise circle…that’s it. No avoidance or follow strategy whatsoever, but probably the most mesmerizing to watch. However, it does make it hard for other robots to shoot at it since it's in constant motion. Robots that are stationary have the hardest time against this robot.

Targeting
Keeps gun stationary at all times, only sees a target if one passes in the front of the radar.

Firing
Shoots at any target it sees while still going in a circle. It fires the gun at maximum power, so waiting for the gun to cool down before shooting again can be a problem when faced with multiple targets.

Robot #4: Crazy
Movement
Erratic and somewhat unpredictable movement. The robot makes a series of right and left turns as it moves ahead. If it hits a wall, reverses direction. This robot makes it hard to target as it keeps moving. I want to note that this robot is an AdvancedRobot, as coding an erratic behavior in a normal Robot would be difficult.

Targeting
Keeps its gun stationary at all times; it only sees a target if one passes in front of the radar. This can be a problem if Crazy is being followed, as it can't see behind itself, and even when it reverses direction it doesn't turn around, it just backs up.

Firing
Shoots at any target it sees, while in its obscure movements. Relies on the fact that a target will be seen, even while moving in obscure motions.

Robot #5: Fire
Movement
For the most part stays stationary, but if a robot gets too close, it moves away. Extremely vulnerable to robots that constantly move, as this robot stays still.

Targeting
Does not target any specific robot, just has its gun spinning all the time. This can be very inefficient as losing sight of a target will result in the robot doing a complete 360 to look for targets.

Firing
Shoots at any target it sees while spinning its gun. Depending on the range of the enemy and current energy, Fire will shoot either at maximum power (3) or just normal power of 1.

Robot #6: Sitting Duck
Movement
Does not move…at all. It may seem like it's waiting for the perfect moment to strike, but it does nothing.

Targeting
A very pacifist robot as it does not target anything.

Firing
Again with the pacifism, does not use its gun or shoot in anyway.

**The Gandhi of all sample robots, just counts the rounds and battles it’s been alive.

Robot #7: Corners
Movement
At the start of the first battle, it moves to the upper left corner and stays stationary once it reaches it. If it died in the previous battle, it will switch to another corner.

Targeting
Once in a corner, the robot continually rotates its gun to scan for targets. It does make use of the fact that it's in a corner, however: the robot does not spin its gun a whole 360 degrees to scan for an enemy, since a maximum of 90 degrees is all that's necessary.

Firing
Simply shoots at any robot it sees, using power proportional to distance: the closer the enemy, the stronger the shot. On the other hand, if its current energy is low (below 15), it will use just a power of 1.

Robot #8: Tracker

Movement
Follows the first target it sees and sticks to it until it or the target gets destroyed; if the target gets destroyed, the robot finds another. If the target is 150 pixels or more away, the robot closes in; if the target is too close, it backs up. I've noticed that an "efficient" tracking method is to stay about 150 pixels from the enemy: this keeps the target within a larger radar scope without the gun having to do a 360 every time it loses track because it's too close.

Targeting
At the beginning of the battle, the robot will rotate its gun until it finds a target. Tracks a single target until itself or the target gets destroyed. If the robot comes into contact with another, it will immediately make that robot its main target, and back up a little.

Firing
This robot only fires at others that have hit it.

My Thoughts

After reviewing these 8 sample robots, I sort of have an idea of different countermeasures that I could implement for simple robots. Some robots are very situational, taking RamFire as an example. In a huge brawl of multiple robots where endurance is key, RamFire is probably not the best choice to throw in, but for 1v1 it would probably perform the best.

I'm still on the fence about tracking robots. Although it seems like a cool idea to track and hunt down a robot, having a one-track mind isn't always efficient. I think the really weird ones like SpinBot, Crazy, and Walls perform the best overall out of all the robots I've evaluated. The fact that the Corners robot stays stationary just begs for it to be mass targeted.

Although these are "sample" robots and aren't meant for competitive play, they still give you an idea of certain elements you would and would not want in a robot.

Sunday, September 13, 2009

It's What's Inside That Matters - Even For Robots

When it comes to the battlefield, even the messiest-coded robot can still win against a well-organized, standards-conforming one. So why would someone even bother to create a standard, let alone re-code all their robots to fit it?

Even if a programmer worked alone, where only he or she would ever look at the source code, he or she would not remember every little detail that was changed. Quick fixes and minor updates that don't get documented properly could leave you scratching your head while you try to remember why you made the change. In a worst-case scenario, you're working on a software team and your colleagues can't figure out how or why your code works.

Taking a quote from The Elements of Java Style, "code that is written to style is predictable, robust, maintainable, supportable, and extensible." Even when programming for something like Robocode, this holds very true. Building a competitive robot requires numerous cases for countless situations. So far, my 13 simple robots only take roughly 1 to 2 KB each, and I have no doubt that a competitive one can reach anywhere between 10 and 20 KB. No matter how pretty an IDE like Eclipse makes your code look (by color-coding words), all it tells you is what the code does; it's up to the programmer to document why it does it.

One comment I remember from my professor in the intro ICS classes is "comment as if the reader knows where you live": a very funny yet creepy way of remembering how you should comment your programs, implying that if you don't comment them properly, your readers will hunt you down. Creating a standard and using proper documentation allows others to easily understand the flow of your program, or any other. Modification becomes a breeze because you'll know what each section of your code does. It also serves as a reminder: if it's old code, you probably won't remember certain details.

For this Software Engineering class, we also have our own ICS standards. Using the ICS standards, the Robocode standards, and The Elements of Java Style, I have modified my simple robots to conform. Even though I had already commented my robots, after reading the aforementioned I realized some of my documentation was wrong. It was a bit embarrassing to look through my robots only to find incorrect documentation where I thought I had it right. Some comments just said what the code was doing rather than why, and simple things like end-line comments were all over the place. Luckily, Prof. Johnson released an XML document with the basic formats that works with Eclipse. It won't properly document your code (haha), but it will give the code basic formatting like proper spacing, Javadoc format, and line lengths.

During modification, I would like to give credit to Kendyll Doi. He created a very nice formula to adjust the robot's firepower in my Firing03 robot. This robot is supposed to use firepower proportional to the target's distance: the farther away the target, the less power it uses, and the closer, the stronger the fire.

My original formula looked like this:
 fire(1 / e.getDistance()); 
But the battlefield is so big that no matter how close the robot was, the value of fire would almost always be at its lowest.

Kendyll's take looked like this:
 fire(3.1 - e.getDistance() / 1200 * 3); 
He worked out that, with the radar's max distance at 1,200 pixels, he could use the ratio of the target's distance to 1,200: at the max distance of 1,200 the robot shoots with a minimum power of 0.1, and at close range with a power of 3.
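As a sketch, Kendyll's formula could be wrapped in a small helper like the one below. The class name and the clamping to Robocode's legal 0.1 to 3.0 power range are my additions for illustration, not part of the original code:

```java
public class FirePowerCalc {
  /** Max distance the radar can see, per the post. */
  private static final double MAX_DISTANCE = 1200.0;

  /**
   * Returns fire power proportional to distance: roughly 0.1 at 1,200 pixels
   * and 3 at point-blank range. The clamp keeping the value in Robocode's
   * legal 0.1 to 3.0 range is my assumption, not in the original formula.
   */
  public static double firePower(double distance) {
    double power = 3.1 - distance / MAX_DISTANCE * 3;
    return Math.max(0.1, Math.min(3.0, power));
  }

  public static void main(String[] args) {
    System.out.println(firePower(1200)); // minimum power at max distance
    System.out.println(firePower(0));    // clamped to maximum power up close
  }
}
```

In a robot this would be called from onScannedRobot with e.getDistance() as the argument.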

My standardized robots can be found here.

Wednesday, September 9, 2009

Java Style: Robocode No Jutsu!

Programming robots...that is pretty much the highlight of my week. Of course, it may not be what you'll initially think. Just those two words alone makes me think of movies such as iRobot and Terminator. But alas, the types of robots I programmed were on a much smaller, more virtual scale.

This past week, and future weeks to come, I am programming in Robocode, a java based, open source game. Robocode's motto alone "Build the best, destroy the rest" (which appears on the splash page every time Robocode is opened) will put a smile on your face as you code and watch your robot take action on the battlefield.

However, I am just a beginner, so much of the enjoyment will come once I have fully grasped the ways of Robocode. For starters, I have developed and completed 12 simple robots that fulfill the requirements assigned in Prof. Johnson's Software Engineering class. These simple robots covered tracking, movement, and firing. Since Robocode is Java-based, I had no problem with syntax, and Robocode's API is well laid out and mirrors Java's very own. The only difficulty is getting to know the different functions each robot has, and the utility functions used to calculate headings/bearings/distances.

My completed robots can be downloaded here.

Upon installation, Robocode comes with a variety of sample robots, each complete with source code so you know exactly how they work. I think including sample robots AND source code is an excellent way to dissect and reverse engineer these robots. It also makes learning to code much easier and smoother.

One of the difficulties I encountered was the coordinate system of the battlefield. I'm used to coding Java applets, whose coordinate system starts in the upper left, but Robocode's coordinate system works as if it's in the first quadrant, with the origin in the lower left. This was disorienting, as my robots would turn in the opposite direction I wanted.

Another difficulty was how Robocode handles degrees. When I started coding turns, I assumed Robocode used the unit circle, with 0 degrees pointing to the right. This proved wrong; I later realized it works just like a clock, where 0 degrees is at the top and angles increase clockwise around to 360. Calculating bearings and headings, and knowing the difference between the two concepts, was a bit difficult. Since you can't explicitly tell your robot to move to a specific degree, you have to add or subtract degrees to get it there. Luckily, the sample robots show code for repositioning your robot given a target.
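To illustrate the clock-style degree system, here is a small helper of my own (not from any robot's source) that computes the shortest signed turn between two headings:

```java
public class HeadingMath {
  /**
   * Robocode-style headings: 0 degrees points up (north) and angles grow
   * clockwise. Returns the signed turn in (-180, 180] needed to rotate from
   * currentHeading to targetHeading by the shortest path; positive means
   * turn right (clockwise), negative means turn left.
   */
  public static double turnTo(double currentHeading, double targetHeading) {
    double turn = (targetHeading - currentHeading) % 360;
    if (turn > 180) {
      turn -= 360;   // e.g. a 270-degree right turn becomes a 90-degree left turn
    } else if (turn <= -180) {
      turn += 360;
    }
    return turn;
  }

  public static void main(String[] args) {
    System.out.println(turnTo(0, 90));    // a right turn toward east
    System.out.println(turnTo(350, 10));  // a small right turn across north
  }
}
```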

There is one robot I am concerned about: my Movement05 robot, which moves to the four corners of the battlefield. I managed to use some basic trigonometry to calculate the angle to turn toward each corner. However, going from the upper left to the lower right corner doesn't work too well if the battlefield is not square: the angles become too steep, and the robot hits a wall about 20 pixels short of the corner.

From coding in Robocode this week alone, I learned that programming these robots is just like programming anything else: you need to be very specific in your code. With robots, you can physically see where your code is acting "funny" without using a debugger that would just give you a very technical error message and the line that caused it. There are also technical challenges such as getting a robot to move to a specific place on the grid, whether that's a point like the center or a corner, or a roaming target you have to track.

Sunday, August 30, 2009

Simple Backup, Think Areca Backup

If you ever find yourself looking for a backup tool that’s easy to use, without all the nonsense of buying into proprietary hardware, then you might want to give Areca Backup by aventin a try. For an average, everyday user it’s a very simple program to learn, and with just a few clicks you can set individual files or multiple directories to be backed up.

I found this program on sourceforge.net, an open-source community where developers post their software for others to use and rate.

Essentially, Areca Backup is file backup software that supports incremental, image, and delta backups on local drives or FTP servers. Areca Backup also lets you browse your backups and navigate among the different versions of the files contained in your archives.


For evaluation purposes I will be using Philip Johnson’s Three Prime Directives of Java-based Open Source Software Engineering.

Prime Directive 1: I believe this program does accomplish a useful task. However, it’s sort of a hit-or-miss deal. Today, most external storage devices come with their own backup software, which can make Areca Backup seem redundant. But as the functionality of the computer grows, so does the base hard drive. A brand-new computer with a 160 GB hard drive has lots of room to spare and normally wouldn’t come with backup software, only a recovery disc. So those who already have a large hard drive will benefit from the usefulness of this software.

Prime Directive 2: The very simple user interface of Areca Backup makes this software very attractive to use. The software itself is only about 5 MB, making download and installation very fast. The online tutorial was very helpful, as it included written steps as well as screenshots. Those two facts alone more than satisfy Prime Directive 2. The types of backups range from basic local backups to server-caliber FTP backups.

Prime Directive 3: Looking through Areca Backup’s documentation, it does have an API for custom plug-ins implementing a custom storage policy, as well as a DTD template found on their website, though both are somewhat vague. Through sourceforge.net, there is also a forum where developers can ask questions about modifying storage policies. For developers looking to modify this program, it’s very possible to do so; there are many resources to check: the forums, the documentation, and Areca’s homepage.

Overall, the software is very intuitive and easy to use, and there’s a very helpful tutorial to point you in the right direction otherwise. Its simple design and advanced features make it attractive both to an average user who wants to back up a few important files and to an experienced user managing multiple drives.

Fizzuous Buzzourous - Remembering Java

FizzBuzz Program Description:

Print out all of the numbers from 1 to 100, one per line, except that when the number is a multiple of 3, you print "Fizz", when a multiple of 5, you print "Buzz", and when a multiple of both 3 and 5, you print "FizzBuzz".

When I was presented with the FizzBuzz program, it seemed like a first-semester Java assignment. I was already familiar with Eclipse, so I opened it and started to declare the prototype of the main method, but once I opened the curly brackets I stumped myself because I forgot how to write modulo. I could recall it involved the %, but I couldn’t remember whether it was one or two %’s. The Boolean operators && (and) and || (or) confused me for a few seconds because those use two symbols, but Eclipse didn’t recognize %% in the condition, so it’s just one %. (Just one of the marvels of using an IDE with MS Word-like error underlines.)

Reading the problem verbatim, I began to create my if-statements accordingly. As I got to the first if-statement, I started with modulo 3 right off the bat, then realized the order of conditions cannot be taken verbatim, because a number like 15 would print out “Fizz” instead of “FizzBuzz”. I successfully completed the code in about 15 minutes. My final code looked like this:

public class FizzBuzz {
  /* Implementation of the FizzBuzz program.
   * Prints the integers 1 to 100:
   * multiples of 3 print "Fizz",
   * multiples of 5 print "Buzz",
   * multiples of both 3 and 5 print "FizzBuzz".
   */
  public static void main(String[] args) {
    for (int i = 1; i <= 100; i++) {
      if (i % 15 == 0) {
        System.out.println("FizzBuzz");
      } else if (i % 3 == 0) {
        System.out.println("Fizz");
      } else if (i % 5 == 0) {
        System.out.println("Buzz");
      } else {
        System.out.println(i);
      }
    }
  }
}

However, Prof. Johnson made an interesting comment in class: although the above program works correctly, how would you go about testing it? It was hard to prove that the program worked, because the solution was hard-coded into the if-statements and there was no way to specifically test the boundary conditions. So I went back and created a static method that takes an integer as a parameter and returns a String. This way, you can feed it an integer and it will return the appropriate value. The second version, which took about 3 minutes to adapt from the first, ended up like this:

// Version 2 of the FizzBuzz program
public class FizzBuzz2 {
  /* The main method feeds integers into the method myFizzBuzz,
   * which returns a String value depending on the integer.
   */
  public static void main(String[] args) {
    for (int i = 1; i <= 100; i++) {
      System.out.println(myFizzBuzz(i));
    }
  }

  /* A static method allows individual integers to be tested;
   * boundary conditions can be checked by manually feeding
   * integers through the main method.
   */
  public static String myFizzBuzz(int j) {
    if (j % 15 == 0) {
      return "FizzBuzz";
    } else if (j % 3 == 0) {
      return "Fizz";
    } else if (j % 5 == 0) {
      return "Buzz";
    } else {
      return String.valueOf(j);
    }
  }
}

(Note that I did use a for-loop to feed it the numbers 1 to 100, but you can replace the loop with hard-coded integers to test the boundary conditions: 1, 15, and 100.)
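To make that idea concrete, here is a minimal sketch of checking those boundary cases by hand, with the myFizzBuzz method copied in so the file compiles on its own (no test framework assumed; the class name FizzBuzzCheck is my own):

```java
// A hand-rolled boundary check for myFizzBuzz: the method is copied in
// verbatim so this file is self-contained.
public class FizzBuzzCheck {
  public static String myFizzBuzz(int j) {
    if (j % 15 == 0) {
      return "FizzBuzz";
    } else if (j % 3 == 0) {
      return "Fizz";
    } else if (j % 5 == 0) {
      return "Buzz";
    } else {
      return String.valueOf(j);
    }
  }

  private static void check(String actual, String expected) {
    if (!expected.equals(actual)) {
      throw new AssertionError("expected " + expected + " but got " + actual);
    }
  }

  public static void main(String[] args) {
    check(myFizzBuzz(1), "1");           // lower boundary
    check(myFizzBuzz(15), "FizzBuzz");   // multiple of both 3 and 5
    check(myFizzBuzz(100), "Buzz");      // upper boundary
    System.out.println("Boundary checks passed");
  }
}
```

Pulling the logic into a method that returns a String is exactly what makes these spot checks possible; the println-only first version could only be verified by reading its output by eye.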