I’ve started writing some tests for Quizipedia, and I’m seeing some advantages that a test-driven approach may have had. In order to write proper tests, I’m being forced to refactor a little bit of the code. It’s not too bad, since I WAS keeping the fact that I’d need to write tests at the back of my head while I was writing the code. My main goal with this was to get SOMETHING out and working, which I was able to do. Now I get to do all the clean up. Now that I’ve scratched the itch to actually get something working, any projects I work on in the future will be test-driven (famous last words?)
Anyways, I started with the basic tests for some of the main library functions in the game (creating/getting/deleting a game and also guessing answers). I am using mocha with the chai assertion library to write my tests, and so far that combination covers the basic functionality that I’m looking for. Those tests were fairly straightforward, so I’ve moved on to testing the actual game-building. This includes things like scoring a given word and deciding if a word is relevant. Writing the tests allowed me to make an important refactor in the code:
Previously I had a single function that would determine the score of a word. This was a class function, since the score depended on other properties of the gameBuilder object, for example the distance to the PREVIOUS relevant word. I wanted to test the score of a word without this dependency, so I added a STATIC relevance check, which the full relevance check now calls as its first step. This will give greater flexibility when I integrate some more powerful word scoring into the game, so I’m glad I caught this early. I was also able to do a tiny bit of tweaking/improvements on some of the word scoring while writing the tests.
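The shape of that refactor can be sketched roughly like this; the class and method names and the scoring rule here are invented stand-ins for the real ones, not the actual algorithm:

```javascript
// Sketch of the refactor: a STATIC relevance check that depends only on
// the word itself, wrapped by an instance-level check that also consults
// builder state (the distance to the previous relevant word).
// All names and rules below are illustrative assumptions.
class GameBuilder {
  constructor(minGap) {
    this.minGap = minGap;               // required spacing between blanks
    this.lastRelevantIndex = -Infinity; // no relevant word seen yet
  }

  // Testable in isolation: no instance state involved.
  static isWordRelevant(word) {
    return word.length > 4 && !/^[0-9]+$/.test(word);
  }

  // Full check: static relevance plus spacing against builder state.
  isRelevant(word, index) {
    if (!GameBuilder.isWordRelevant(word)) return false;
    if (index - this.lastRelevantIndex < this.minGap) return false;
    this.lastRelevantIndex = index;
    return true;
  }
}
```

The static half can now be unit-tested (and later swapped for a smarter scorer) without constructing a builder at all.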
So the two library test suites are the Basic Game functionality (done) and the Game Building functionality (in progress). Once these are done, I am going to move on to writing tests for the DAL (simply sql calls => should be pretty straightforward) and then tests for the handlers (slightly more complicated).
I’m back from my trip out East and ready to get back into some existing and new projects. The Postgres Vision conference was a good opportunity to learn about a bunch of different companies that are coming up with new software using postgres. Many of the talks I went to gave an overview of some technology, followed by the presenter’s software which makes use of that technology. It wasn’t a very big conference – maybe about 200 people – so it wasn’t too overwhelming, and in that respect, it was a good place for my first tech conference.
So now that I’m back, I have some goals for July:
- Quizipedia tests: I didn’t get a chance to write the tests that I wanted to last month, so that is going to be my priority for July. I want to have some unit and integration tests around the entire codebase, and moving forward with the project, I want to get in the habit of doing test-driven development
- Quizipedia improvements: The relevant word algorithm on Quizipedia is very primitive, so I want to do a bit of research on more intelligent ways of improving it. I may look at some of the language processing libraries that I came across while studying Postgres. This is primarily about research and investigation, so I likely won’t make any sweeping changes to the code in this area.
- Open-source project: I’m going to shop around to see if I can find an open-source project to do some work on. I’m not locking in any theme of the project, but I am going to be looking especially hard for something related to AI. These days, saying that seems like a vague statement, so I may start with some areas that I have a bit of experience in, such as constraint satisfaction problems, writing Prolog, and motion planning.
- Books: I’ve been reading quite a few software development books lately, and I’d like to write some reviews/summaries for them.
Today I got through the postgres chapter on indexes. I actually thought there would be a lot more to indexes, but it turns out I’m already fairly familiar with the important details. There are some index implementations in postgres that I was totally unfamiliar with (GiST, GIN, BRIN), but the B-tree is easily the most important implementation as far as I’m currently concerned.
There were a few enlightening lessons, such as the importance of ordering the columns in your indexes properly, and the concept of partial indexes, but for the most part this ended up being a formalization and review.
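For reference, those two lessons look like this in SQL (the table and column names are invented for illustration):

```sql
-- Column order matters: this index helps queries that filter on game_id,
-- or on (game_id, word), but not queries that filter on word alone.
CREATE INDEX idx_blanks_game_word ON blanks (game_id, word);

-- Partial index: only index the rows a hot query actually touches,
-- e.g. unanswered blanks, which keeps the index small.
CREATE INDEX idx_blanks_unanswered ON blanks (game_id)
  WHERE answered = false;
```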
Today I was able to add the client-side views and connections to the quizipedia app. There are two simple views:
- The game creation form: Enter a game name and the text to be parsed into a quiz.
- The game itself: This is where the blanked out game and interaction takes place
As with the server-side code, I had written quite a bit of this code previously in another repo, so it was a matter of getting it organized and hooked up with the rest of the new system. So it only took me one session to get all that done. And at this point, I’ve pretty much used up all my previously-written code, so all the next stuff is going to be new for me, and will likely take longer.
So the main connections are made, and it is working well both locally and on production! So quite a satisfying day, but still lots of work to do. Over the next day or two, I’m going to get as much of the following done as I can:
- Add in logical error-handling and logging
- Style the client-side so it looks nice
- Provide more feedback on correct/incorrect answers to the user
- Add tests for the handlers
- General code cleanup: comments, proper conventions, consistency, etc.
- DB maintenance – maybe automate some scripts to clean up the db every so often
Once I finish a few of those tasks, I’m going to be re-focusing my goals for the month of June, which I will discuss in a later post.
I finished up most of the server-side code for my quizipedia web application today. It was fairly easy to write, since I had a few repos of sandbox code that had at least 75% of the main functionality, so it was just a matter of cleaning it up and putting it all together.
Here’s what I’ve accomplished in the past two sessions of working on it:
- Set up a Heroku project using the node-related tutorial
- Heroku makes it very easy to actually deploy something and allows you to have an easy testing environment, using a local testing database
- Set up a postgres database (local and prod)
- To add a PG db I just had to “provision” the Heroku PostGres add-on, which is straightforward and included in the node tutorial
- My main issue was that I was trying to connect my local environment to the production database that Heroku sets up for you. It took me too long to realize that I was supposed to be using a LOCAL db, which I quickly set up
- I think it will be important to keep a change log of any db updates that I do. When making changes locally, I can just wipe the db and re-run the script I’ve saved. But this isn’t going to work on prod, where the existence of the data depends on me NOT destroying the tables. I have some experience using a migrations package, which allows you to create a new ‘migration’ file anytime you need to make a change, for example adding a table column. This will be essential for keeping track of all my changes. I’m not going to add it in right now, as my focus is to get something out and working.
- Added the main handlers. This is a RESTful application, so the primary handlers right now include:
- POST /game: This is where a user would post some text to be formatted into a quiz. Eventually I would like to have them just send a link to a webpage with the data, do the parsing, and create the quiz that way.
- GET /game/:gameId: This is going to fetch the game information by the id generated in the saving functionality. It will return the blanked out game. Further parsing will be done on the client-side
- POST /guess: This is a call to allow the user to submit a guess for one of the blanks. Their guess will be verified against the true answer. If it is correct, it will be saved accordingly.
- Connected the handlers to the DB via a function library and a data layer.
- The data layer is essentially an active-record setup, where the models reflect the columns in the database. I have three tables, so I have three DAL classes.
- The library is the business logic layer, where the relevant word algorithm gets used, and the objects get converted to the appropriate formats
- Building the game is a large job, so I created a completely separate ‘gameBuilder’ helper class which is called by the main ‘game’ library class.
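The local-vs-prod lesson from the Heroku setup boils down to picking the connection string from the environment rather than pointing local code at the production database. A sketch, where the local fallback URL is an invented placeholder:

```javascript
// Heroku sets DATABASE_URL on prod; locally we fall back to a local DB.
// The fallback URL below is a made-up placeholder, not the real one.
function dbConfig(env) {
  const url = env.DATABASE_URL
    || 'postgres://localhost:5432/quizipedia_dev';
  return {
    connectionString: url,
    // Heroku Postgres requires SSL; a local database does not.
    ssl: Boolean(env.DATABASE_URL),
  };
}
```

The resulting object can be handed straight to a pg Pool constructor, so the same code runs unchanged in both environments.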
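The core of the POST /guess handler can be isolated as a pure function so it is testable without a server; the game shape and the case-insensitive comparison rule below are assumptions for illustration:

```javascript
// Verify a guess for one blank against the true answer, recording it
// if correct. Pure logic: an HTTP route would unpack the request body
// and call this. The game object's shape here is a hypothetical one.
function checkGuess(game, blankIndex, guess) {
  const answer = game.answers[blankIndex];
  if (answer === undefined) {
    return { status: 400, correct: false }; // no such blank
  }
  const correct =
    guess.trim().toLowerCase() === answer.toLowerCase();
  if (correct) {
    game.solved[blankIndex] = true; // "saved accordingly"
  }
  return { status: 200, correct };
}
```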
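The DAL layering can be sketched as one class per table, with the query function injected so the class can be tested against a fake instead of a live database (the table and column names are illustrative, not the real schema):

```javascript
// Active-record-style DAL class for the games table. The query function
// is injected: in production it wraps a pg pool, in tests it's a stub.
class GameDal {
  constructor(query) {
    this.query = query; // (sql, params) => Promise<rows>
  }

  async findById(id) {
    const rows = await this.query(
      'SELECT id, name, text FROM games WHERE id = $1', [id]);
    return rows[0] || null;
  }

  async insert(name, text) {
    const rows = await this.query(
      'INSERT INTO games (name, text) VALUES ($1, $2) RETURNING id',
      [name, text]);
    return rows[0].id;
  }
}
```

Injecting the query function keeps the DAL tests fast and keeps SQL out of the business-logic layer above it.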
So I’m happy with this initial progress. Again, a lot of it was just organizing existing code from a few scattered repos into something workable.
My next big step is going to be to add the client-side handling. This will mainly involve submitting a new game text, presenting the constructed game, and filling out correct answers as needed.
Some simple, yet sometimes-forgettable practices to remember:
- 4 attributes of good code: Maintainable, efficient, usable, dependable
- Establish programming conventions before beginning
- Make code portable. Don’t hard-code values
- SPOD: Single Point of Definition
- Be pro-active with tests. Use Test-Driven Development
- Deployment considerations:
- Keep only what is needed – nothing unused
- Have a roll-back strategy
- Use automation for repeatable processes – eliminates human error
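The SPOD and no-hard-coded-values points can be as simple as one frozen config module that every other module imports; the names and defaults here are invented examples:

```javascript
// Single Point of Definition: each tunable value is defined exactly once,
// with environment overrides so nothing is hard-coded to one machine.
const config = Object.freeze({
  port: Number(process.env.PORT) || 3000,
  maxGuesses: Number(process.env.MAX_GUESSES) || 5,
  minWordLength: 4, // defined here once; never repeated elsewhere
});

module.exports = config;
```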
- Traditional Waterfall approach
- Requirements, Design, write code, subsystem testing, system testing
- The assumption is that this process is rigidly consecutive, e.g. Once requirements are done, they will never change. This does NOT reflect reality
- General problems:
- Poor code quality
- Inaccurate understanding of user needs
- Inability to deal with changing requirements
- Specific problems:
- Ambiguous communication
- Insufficient testing
- Bad requirement management
- Undetected inconsistencies in requirements
- Best practices
- Develop iteratively
- Resolve critical risk before making large investment
- Early iterations promote early feedback
- Manage requirements
- Expect them to change
- Agree with user on the WHAT, not the HOW
- Prioritize the requirement changes
- Component architecture
- Permits re-use
- Allows use of commercial software and APIs
- Easy to break down tasks for a team
- Improves maintainability and extensibility
- Model software visually
- Promotes unambiguous communication
- Can hide/expose details in accordance with the requirements