Saturday, December 29, 2012

Pymisc module

    Pymisc is a module of miscellaneous utilities for your average Python scripts and projects.
    This module was developed with the same idea as "django-misc", which I've described before: to move frequently used utilities to one specific location.
    To install it, you can use the GitHub (latest) version or the PyPI (stable) version via pip:
pip install git+git://
or, for the stable version, from PyPI:
pip install pymisc
    Now that it's installed on your machine, let's discuss what you get from it:
  • contains @logprint (entry to and exit from a function are logged, as well as any crashes that may happen) and @memorized (a caching decorator)
  • contains a Settings class that provides an experience close to django.conf.settings; additionally, you can actually change values, and they will be auto-saved when the application closes.
  • the utils package contains a long list of routines for different purposes, which I'll describe in the GitHub documentation one day
  • the reader package contains a couple of CSV utility modules that really help when you work heavily with this format of data files
  • the django and html packages are actually copies of django-misc stuff, so if you use that already, just ignore them
  • web.browser.Browser is a class that provides some basic routines on top of the usual urllib module, making it easier to do JSON requests, download files, etc.
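
Pymisc's actual implementation may differ, but a caching decorator along the lines of @memorized can be sketched roughly like this (the code below is my own minimal illustration, not the module's source):

```python
import functools

def memorized(func):
    """Minimal caching decorator in the spirit of pymisc's @memorized.
    Results are cached by positional arguments (which must be hashable)."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)  # compute once, reuse on later calls
        return cache[args]

    return wrapper

@memorized
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

With the cache in place, fib(30) runs in linear time instead of exponential, because each intermediate value is computed only once.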
   I'll continue developing this module and adding more stuff (including some docs and examples), and if you have a piece of code that you think belongs in this kind of place, let me know or fork & pull-request on GitHub.

Friday, December 28, 2012

Release cycle - Part II

      This is an additional part to the previous post about the release cycle.

      So I continued reading +Joel Spolsky, and guess what? He actually ended up with Kanban ideas in mind in one of his latest posts. Apparently I should read all of Joel's stuff before commenting :)

      The basic idea behind a "pull"-style scheduling system (as I understand it) is getting features to the customer as fast as possible. With that said, your release cycle should be getting faster and faster. And for this, of course, you will need all these modern tools (TDD and unit tests, DVCS, etc.), which will allow you to push changes that are ready for deployment to the production server and know that the feature works and has high quality.

      By making "pull" system, you'll illuminate most of "waste" inventory and will be able to show customers same features you use "in-house". Imagine, you have configured you Continues what-ever system (Delivery or Deployment) - system that allows to get code that works through test-system to the production (web service or auto-update server). But this is only part of job for "pull" system.

      You also need a policy of choosing features to develop based only on highest priority. This means that if you have a mile-long backlog, you will need to sort it by priority (number of clients asking for the feature and/or additional $$ gained) and acknowledge to yourself that low-priority features won't be done... ever. You'll always get new ideas for features from clients or from the team, and those will come in with high priority. So don't spend time revising the old ones again; throw them out. If any of those features was really important, it will come up again.

     Ok, that's all fun when you have perfect code that is separated into modules and fully covered by unit tests. But let's face the truth: the code is a mess and the tests are inferior. This means refactoring is required while we still want to push new features out (customers are waiting, and nobody wants to disappoint them).

     To get refactoring going, let's put a feature in the backlog with a concrete proposal for how to refactor parts of the code (not just "rework this", but a concrete "this should have interfaces A and B and implement algorithm C") and set a high priority on it. Then, when the current features are done, the refactoring tasks will be performed by developers one by one. A refactoring proposal should describe how the new code will be tested (unit tests, integration tests, system tests, performance tests) to ensure that it works correctly. If a new high-priority feature comes into play, it will land after some of the refactoring tasks that are already in the developers' queues.

    While the code is still imperfect, the release cycle won't be continuous, but it should be minimized: from years to months as a first step. Once a new minor version is released each month, the next step is weeks - just a couple of features (sometimes small and sometimes rather large). All the while, refactoring should continue, to get the code into better shape and unlock the possibility of delivering software continuously.

Wednesday, December 26, 2012

Release cycle

      And again, let's start the topic from +Joel Spolsky's post about "Picking a Ship Date". His main idea in the article is:
  • If the product is brand new - release often
  • If the product is mature - a 1-2 year release cycle is for you
  • If the product is a platform - at least 3 years.
      Ok, the article is from 2002 (what? yep, that old). So there was no distributed version control system yet (no, really, the first releases of Git and Mercurial were in April 2005).

      Now, how did that change the world? It actually changed it pretty dramatically. Before, the main workflow was to develop a set of big features and then go into a "pre-release cycle", when you fix issues, add small features, and get your QA cracking on your software. Because of that, if you have a 1-year release cycle, you would have only 3-5 months to get big features into the system, and then you have this long cycle of getting the software into production form.

      Behold: DVCS gave you the option to keep the master branch in "release" form all the time. If you are doing work that requires more than one commit (which any normal work does, because you should commit frequently), you just put it in a separate branch, which will be merged when it's ready (and maybe even tested by QA). In the old world you would just make your current state unreleasable by pushing your commit to trunk (ok, there were branches in SVN... but really, did anybody use them to build new features?).

     Let's look at GitHub - they've made 2000+ releases in a 6-month period. Some of them, sure, were pretty small - a bug fix, a simple button that a million customers asked for, or just a tooltip that improved usability. But some of them were pretty big - a new interface, a changed backend, or maybe a new system of distributed handling of repositories (I'm making this one up, though). The idea is that each release was the same: somebody finished his work in a separate branch and merged it into the main (release) branch.

    Another example is Google Chrome - I think it has the best update system in the world. IT DOESN'T NEED ME TO DO ANYTHING (I'm looking at you, Flash, Java, ...)! Yes, and I like it. And over the past 4 years I've been using it, they've managed not to change the interface unexpectedly - so the argument that "too frequent releases will affect usability" doesn't hold if the development process is built with usability in mind.

     In conclusion, I would say that frequent releases are a good thing when the update system is very good (a web service, or auto-update without the person even noticing) and when you have a very concrete plan of features that will be implemented without breaking usability (features add functionality, not UI complexity).

Monday, December 24, 2012

Management Team

      Today I read a guest post from +Joel Spolsky - Management Team. The main idea is to let your developers (QAs, etc.) do their job the way they think is right (because they have more knowledge of it) and not micromanage them.

      The idea sounds reasonable, but I have some doubts about situations when people don't have good interpersonal skills.
      Let's see an example with Pet and Jack. They are both senior developers in a team that is developing a new version of super product X. They have a product manager who wrote an awesome functional spec, and now they are discussing the implementation. Pet thinks that implementing with B-trees will be better, while Jack really likes red-black trees. Now, they have a pretty straightforward way to figure out who is right: kick the code in and compare which solution is better for the particular situation. Easy, right?

     Wait... what if you now need to choose whether to use library A or library B? You can't easily anticipate future problems (oops, library A had an issue on HP-UX when you have a MySQL daemon running), nor can you implement both solutions fast enough to test which one is better.

     Even worse: what if library A is actually a full-blown framework for solving problems and you just need to customize it, while library B is a set of functions which you can use, but you need to develop a wrapper around it to make it work? I mean, when you can't put an abstraction level between your code and the library to decide later which one you want to use.

     Pet and Jack will argue about which library is better to use, and because Jack doesn't actually like to talk much (yep, he is better at expressing himself in code), he decides to give up and agree with Pet on using library A.

     Now, you see +Joel Spolsky has a Team Lead on the chart, who presumably should settle arguments like this. He takes responsibility for making large design decisions, selecting tools, and setting up conventions for his team. For this, a Team Lead should have experience with a lot of things and be very good at judging what will be better for the team and further development.

     Returning to our example: Pet got promoted to Team Lead, because the guy who was Team Lead before was caught sleeping with the CEO's secretary and got kicked out. Pet was chosen because he has better "people" skills and is very knowledgeable about the product the team is developing. Oh yeah, and he is pretty good at beer pong (Jack didn't go to that party, so we don't know if he is better than Pet).
    Pet got a request to implement a new feature for the next version and sat down with Jack to design it. Of course Jack has some ideas about a better design, but he already argued with Pet once and got kicked. Plus, Pet is now his boss. So he listens to Pet's ideas, which are mostly good, and even if there are some not-so-good design decisions, he will just agree.

    So in the end, we see that people skills determine not just who gets promoted, but whose ideas get implemented.

Sunday, December 23, 2012

Functional and Technical spec in software design

      I started reading +Joel Spolsky's blog pretty heavily. I'm reading his old posts from the 2000s. I'm pretty sure most of you are familiar with his blog - it's kind of famous in the software development world.
I'm reading 3-5 posts a day. I don't agree with some of his thoughts (he didn't agree with some of them either as time went by :) ), but most of them are pretty bright.

      One of the ideas that I'm trying to employ is technical writing - guess what, this blog was made for this: to practice writing in English on technical topics. But because of my laziness I haven't been doing much here. That should change in the next month or so.

      Another thing that I'm trying to get used to is functional specs, as he calls them. A document that describes a feature - essentially a document that helps everybody involved in the software development process understand how the feature should work and what should be done for it.

      But a functional spec, as Joel points out, is a view from the user's standpoint. And it can be written by a program manager - a person who is not a developer, but more like a marketing/product development kind of person.

      On the other hand, in complicated situations - like developing a new product or producing a large feature (more like a feature set) - when the implementation is not clear, there should be a technical spec. Or the functional spec should incorporate this information.

      The purpose of that is to think about design/implementation and future obstacles:

A software design is simply one of several ways to force yourself to think through the entire problem before attempting to solve it. Skilled programmers use different techniques to this end: some write a first version and throw it away, some write extensive manual pages or design documents, others fill out a code template where every requirement is identified and assigned to a specific function or comment. For example, in Berkeley DB, we created a complete set of Unix-style manual pages for the access methods and underlying components before writing any code. Regardless of the technique used, it's difficult to think clearly about program architecture after code debugging begins, not to mention that large architectural changes often waste previous debugging effort. Software architecture requires a different mind set from debugging code, and the architecture you have when you begin debugging is usually the architecture you'll deliver in that release.
      This is Design Lesson 2 from the history of Berkeley DB. Check it out - a nice article about the history of developing a pretty complicated system.

Sunday, October 21, 2012

Development environment - Part I

The objective of this series of posts is to figure out an environment for developing cross-platform, multi-targeted (desktop and web, SaaS, front-end and back-end parts), and multi-lingual (C++, Python, Java, JS) applications, using TDD for quality control and Continuous Delivery to release faster and more frequently. This post is mostly for my own reference, but you may find it useful as well (or comment on what I did wrong :) ). I hope that this series of posts will initiate a discussion on this topic, where everybody will participate to figure out the best layout.

In this first post, I want to describe the requirements in depth, and then, step by step in the next posts, build a system that will satisfy them.

So, let’s again describe the requirements that the environment should satisfy:
1) Software will be cross-platform: Windows, Linux, FreeBSD, Mac OS, HP-UX, AIX, iOS, Android.
This also adds an additional plane of thinking - people want to use the best IDEs available on a given platform. For example, for development in C++ on Windows, Visual Studio will be the best choice, and developers wouldn’t want to downshift to Eclipse or other cross-platform but less developer-friendly systems.
2) Software will be multi-targeted: the application will have back-end and front-end sides, each of which can run on multiple platforms - the back-end may be running on a desktop, a server, or in the cloud, while different front-ends will be running on desktops and on the web.
3) Multi-lingual: the lowest level of the back-end will run on C++ to achieve the best efficiency, while higher levels of the back-end may run on Python (faster development, less code). The front-end may contain Java and/or JS (desktop and web).
4) The system should be extensively tested on a regular basis (best if after each commit) by unit tests, integration tests, and performance tests.
5) The system should provide Continuous Delivery, which will allow putting a feature into production the same day it was finished (and passed all tests) and letting users give feedback about it. But if one of the new features breaks the build, this shouldn’t stop other new features from being released.
6) The environment should have outlined behaviour for teams of different sizes and at different stages of development. (For example, at the beginning a couple of people may work on the same component, and they should work in the same branch; on the other hand, at the feature-creep stage, each person works on a separate small/medium-size feature and should work in their own branch.)

If we go for a more detailed enumeration of IDEs on each platform:

  • Windows: 
    • Visual Studio for C++;
    • Notepad++ for Python, JS;
    • Vim for C++, Java, Python, JS;
    • Eclipse for C++, Java, Python, JS;
    • QtCreator for C++.
  • Linux: 
    • Vim for C++, Java, Python, JS; 
    • Eclipse for C++, Java, Python, JS;
    • QtCreator for C++.
  • Mac OS: 
    • Vim for C++, Python, JS; 
    • Eclipse for C++, Java, Python, JS; 
    • XCode for C++;
    • QtCreator for C++

As we see, Eclipse is a pretty universal IDE, as is Vim. But developers are different by nature, and some want the best tools and development experience money can buy on each platform - i.e., Visual Studio for Windows, XCode for Mac OS. Others will tolerate a “common” denominator and will want to work in a universal development environment. Some will enjoy using QtCreator, which is cross-platform and pretty good for C++.
As a result, we want our solutions/projects to be generated from a single source to multiple destinations (VS projects, Eclipse projects, QtCreator support) and to be able to work with a plain make system as well.
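
The single-source idea can be toy-sketched like this: describe the project once in one data structure, and emit a file per build tool from it. This is just a simplified illustration of the concept (the project name and file formats here are made up, not a real generator):

```python
# One project description feeding several build tools.
project = {
    "name": "superx",
    "sources": ["main.cpp", "engine.cpp"],
}

def to_makefile(p):
    """Emit a minimal Makefile rule from the shared description."""
    objs = " ".join(s.replace(".cpp", ".o") for s in p["sources"])
    return f"{p['name']}: {objs}\n\t$(CXX) -o $@ $^\n"

def to_qtcreator_pro(p):
    """Emit a minimal qmake .pro file; qmake lists files under SOURCES."""
    return "TARGET = {}\nSOURCES = {}\n".format(p["name"], " ".join(p["sources"]))

print(to_makefile(project))
print(to_qtcreator_pro(project))
```

A real generator would of course need to know each IDE's full project format, but the key point stands: the list of sources lives in exactly one place.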

The quality of the build is very important, especially if Continuous Delivery is required. To ensure that a fresh build is of high quality, the following activities should be performed:

  • Static code analysis to straighten out the code and prevent issues.
  • Unit tests (each piece of functionality should be tested; tests should be fast and not involve any external components or services (no filesystems, databases, etc.)).
  • Component tests (each component should be tested as a combination of units).
  • System tests (multiple components working together).
  • Functional tests (testing functionality from the user’s perspective/user stories).
  • Load and performance tests (to ensure speed and stability).
  • Issues and bugs should be translated into the appropriate type of tests.
  • Code coverage to ensure that the functionality is actually tested.
  • Deployment to a staging environment, where the system can be tested by QA or by a subset of customers.
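
The unit-test point above rules out external components, and a common way to honor that is to inject dependencies so a test can substitute a fast fake. A minimal sketch (the load_config function and its reader argument are hypothetical names, just for illustration):

```python
from unittest import mock

def load_config(reader):
    """Parse 'key=value' lines provided by a reader callable.
    The reader is injected so unit tests never touch the filesystem."""
    config = {}
    for line in reader():
        key, _, value = line.partition("=")
        if key:
            config[key.strip()] = value.strip()
    return config

# In production the reader would wrap a real file...
def file_reader(path):
    def read():
        with open(path) as f:
            return f.read().splitlines()
    return read

# ...but in a unit test we substitute an in-memory fake:
fake_reader = mock.Mock(return_value=["host = localhost", "port = 8080"])
assert load_config(fake_reader) == {"host": "localhost", "port": "8080"}
fake_reader.assert_called_once()
```

The test stays fast and deterministic, which matters a lot if the whole suite has to run after every commit.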

The list of activities can be extended if needed, and the requirements will become more detailed over time.
If somebody sees that I missed something - please leave a comment.

UPD. QtCreator added per Vlad's comment.