Saturday, December 29, 2012

Pymisc module

    Pymisc is a module of miscellaneous utilities for your average Python scripts and projects.
    This module was developed with the same idea as "django-misc", which I've described before - to move frequently used utilities into one specific place.
    To install it, you can use either the GitHub (latest) version or the PyPI (stable) version via pip:
pip install git+git://github.com/ilblackdragon/pymisc.git
or, for the stable version from PyPI:
pip install pymisc
    Now that it's installed on your machine, let's discuss what you get from it:
  • decorators.py contains @logprint (logs entering and exiting a function, as well as any crashes that may happen) and @memorized (a caching decorator; see the sketch after this list)
  • settings.py contains a Settings class that provides an experience close to django.conf.settings; additionally, you can actually change values and they will be auto-saved when the application closes.
  • the utils package contains a long list of routines for different purposes, which I'll describe in the GitHub documentation one day
  • the reader package contains a couple of CSV utility modules that really help when you work a lot with this format of data files
  • django and html are actually copies of django-misc stuff, so if you already use it - just ignore them
  • web.browser.Browser is a class that provides some basic routines on top of the usual urllib module, to make it easier to do JSON requests, download files, etc.
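    To give a flavor of the caching decorator idea, here is a minimal sketch of what a decorator like @memorized typically does. This is my own illustration, not the pymisc source, so the names and behavior of the actual module may differ:

import functools

def memorized(func):
    # Cache results keyed by the positional arguments (illustrative sketch only).
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    return wrapper

@memorized
def fibonacci(n):
    # Without caching this recursion is exponential; with the cache it becomes linear.
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(30))  # 832040, computed almost instantly thanks to the cache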
   I'll continue developing this module and adding more stuff (including some docs and examples), and if you have a piece of code that you think belongs in this kind of place - let me know, or fork & pull-request on GitHub.

Friday, December 28, 2012

Release cycle - Part II

      This is an additional part to the previous post about the release cycle.

      So I continued to read +Joel Spolsky, and guess what? He actually ended up with Kanban ideas in mind in one of his latest posts. Apparently I should read all of Joel's stuff before commenting :)

      The basic idea behind a "pull" kind of scheduling system (as I understand it) is getting features to the customer as fast as possible. That means your release cycle should be getting faster and faster. And for this, of course, you will need all these modern tools (TDD and unit tests, DVCS, etc.), which allow you to push changes that are ready for deployment to the production server and know that the feature works and has high quality.

      By making "pull" system, you'll illuminate most of "waste" inventory and will be able to show customers same features you use "in-house". Imagine, you have configured you Continues what-ever system (Delivery or Deployment) - system that allows to get code that works through test-system to the production (web service or auto-update server). But this is only part of job for "pull" system.

      You also need a policy of choosing features to develop based only on the highest priority. This means that if you have a mile-long backlog, you will need to sort it by priority (number of clients asking for a feature and/or additional $$ gained) and acknowledge to yourself that the low-priority features won't be done... ever. You'll always get new ideas for features from clients or from the team - and those will come in with high priority. So don't spend time revising the old ones again - throw them out. If any of those features was really important, it will come up again.
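      Just to make the "sort by priority" idea concrete, here is a toy Python sketch; the backlog items and the priority formula are purely made up for illustration:

# Each backlog item: (name, clients asking for it, extra $$ it would bring).
backlog = [
    ("export to PDF", 42, 3000),
    ("dark theme", 5, 0),
    ("SSO integration", 17, 12000),
]

def priority(item):
    # Toy formula: every client request counts, and so does every $1000 of extra revenue.
    name, clients, revenue = item
    return clients + revenue / 1000.0

for name, clients, revenue in sorted(backlog, key=priority, reverse=True):
    print(name, clients, revenue)

      Whatever ends up at the bottom of that sorted list is exactly the stuff to throw out.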

     OK, that's all fun when you have perfect code: it's all separated into modules, it's all covered by unit tests, and so on. But let's look the truth in the eye - the code is a mess and the tests are lacking. This means that refactoring is required, while we still want to push new features out (customers are waiting and nobody wants to disappoint them).

     To get refactoring going, let's put a feature in the backlog with a concrete proposal for how to refactor parts of the code (not just "rework this", but a concrete "this should have interfaces A and B and implement algorithm C") and set a high priority on it. Then, when the current features are done, refactoring tasks will be performed by developers one by one. The refactoring proposal should describe how the new code will be tested (unit tests, integration tests, system tests, performance tests) to ensure that it works correctly. If a new high-priority feature comes into play, it will land after some of the refactoring tasks that already got into the developers' queues.

    While the code is still imperfect, the release cycle won't be continuous, but it should be minimized: from years to months as a first step. Once a new minor version is released each month, the next step is to go to weeks - just a couple of features each time (sometimes small and sometimes rather large). Meanwhile, refactoring should keep going to get the code into better shape and unlock the possibility of delivering software continuously.

Wednesday, December 26, 2012

Release cycle

      And again, let's start the topic from a +Joel Spolsky post, "Picking a Ship Date". His main idea in the article is:
  • If the product is brand new - release often
  • If the product has matured - a 1-2 year release cycle is for you
  • If the product is a platform - at least 3 years.
      OK, the article is from 2002 (what? yep, that old). So there was no distributed version control system yet (no, really, the first releases of Git and Mercurial were in April 2005).

      Now, how did that change the world? It actually changed it pretty dramatically. Before, the main workflow was to develop a set of big features and then go into a "pre-release cycle", where you fix issues, add small features and get your QA cracking at your software. Because of that, with a 1-year release cycle you would have only 3-5 months to get big features into the system, and then spend the rest of the cycle getting the software into production shape.

      Behold, DVCS gives you the option of having the master branch in "release" form all the time. If you are doing work that requires more than one commit (which any normal work does - because you should commit frequently), you just put it in a separate branch, which gets merged when it's ready (and maybe even tested by QA). In the old world, you would just make your current state unreleasable by pushing your commit to trunk (OK, there were branches in SVN... but really, did anybody use them to develop new features?).
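      A minimal sketch of what this workflow looks like with Git (the branch name and commit message are made up):

git checkout -b shiny-feature        # start the work in a separate branch
# ...edit and commit as often as you like; master stays releasable the whole time
git commit -a -m "Implement shiny feature"
git checkout master
git merge shiny-feature              # merge only when it's ready (and maybe QA-tested)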

     Let's look at GitHub - they've made 2000+ releases in a 6-month period. Some of them were surely pretty small - a bug fix, a simple button that a million customers asked to add, or just a tooltip that improved usability. But some of them were pretty big - a new interface, a change of backend, or maybe a new system for distributed handling of repositories (I'm making this up, though). The idea is that every release looked the same - somebody finished their work in a separate branch and merged it into the main (release) branch.

    Another example is Google Chrome - I think it has the best update system in the world. IT DOESN'T NEED ME TO DO ANYTHING (I'm looking at you, Flash, Java, ...)! Yes, and I like it. And over the past 4 years I've been using it, they've managed not to change the interface unexpectedly - so the argument that "too frequent releases will hurt usability" doesn't hold if the development process is built with usability in mind.

     In conclusion, I would say that frequent releases are a good thing when the update system is very good (a web service, or auto-update without the person even noticing) and when you have a very concrete plan of features that will be implemented without breaking usability (features should add functionality, not UI complexity).

Monday, December 24, 2012

Management Team


      Today I read a guest post from +Joel Spolsky at avc.com - Management Team. The main idea is to let your developers (QAs, etc.) do their job the way they think is right (because they have more knowledge of it) and not micromanage them.

      The idea sounds reasonable, but I have some doubts about situations where people don't have good interpersonal skills.
   
      Let's look at an example with Pete and Jack. They are both senior developers on a team that is developing a new version of super product X. They have a product manager who wrote an awesome functional spec, and now they are discussing the implementation. Pete thinks that implementing with B-trees will be better, while Jack really likes red-black trees. Now, they have a pretty straightforward way to figure out who is right - write the code both ways and compare which solution is better for this particular situation. Easy, right?

     Wait... what if you now need to choose whether to use library A or library B? You can't easily anticipate future problems (oops, library A has an issue on HP-UX when the MySQL daemon is running), nor can you implement both solutions fast enough to test which one is better.

     Even worse, what if library A is actually a full-blown framework for solving the problem, which you just need to customize, while library B is a set of functions you can use but need to write a wrapper around to make it work? I mean the case when you can't put an abstraction layer between your code and the library and decide later which one you want to use.

     Pete and Jack will be arguing about which library is better to use, and because Jack doesn't actually like to speak much (yep, he is better at expressing himself in code), he decides to give up and agrees with Pete to use library A.

     Now, you see, +Joel Spolsky has a Team Lead on the chart, who presumably should settle arguments like this. He takes responsibility for making large design decisions, selecting tools and setting up conventions for his team. For this, a Team Lead should have experience with a lot of things and be very good at judging what will be better for the team and for further development.

     Returning to our example, Pete got promoted to Team Lead, because the guy who was Team Lead before was caught sleeping with the CEO's secretary and got kicked out. Pete was chosen because he has better "people" skills and is very knowledgeable about the product the team is developing. Oh yeah, and he is pretty good at beer pong (Jack didn't go to that party, so we don't know if he is better than Pete).
   
    Pete got a request to implement a new feature for the next version and sat down with Jack to design it. Of course Jack has some ideas about a better design - but he already argued with Pete once and lost. Plus, Pete is now his boss. So he listens to Pete's ideas, which are mostly good, and even if there are some not-so-good design decisions - he will just agree.

    So as a result, we see that people skills determine not just who gets promoted, but whose ideas get implemented.

Sunday, December 23, 2012

Functional and Technical spec in software design

      I started reading +Joel Spolsky's blog pretty heavily. I'm reading his old posts from the 2000s. I'm pretty sure most of you are familiar with his blog - it's kind of famous in the software development world.
I'm reading 3-5 posts a day. I don't agree with some of his thoughts (he stopped agreeing with some of them too as time went on :) ), but most of them are pretty bright.

      One of the ideas that I'm trying to adopt is technical writing - guess what, this blog was made for exactly that: to practice writing in English on technical topics. But because of my laziness I haven't been doing much here. That should change in the next month or so.

      Another thing that I'm trying to get used to is functional specs, as he calls them. A functional spec is a document that describes a feature - essentially a document that helps everyone involved in the software development process understand how the feature should work and what should be done for it.

      But a functional spec, as Joel points out, is a view from the user's perspective. And it can be written by a program manager - a person who is not a developer, but more like a marketing/product development kind of person.

      On the other hand, in complicated situations - like developing a new product or producing a large feature (more like a feature set) - when the implementation is not clear, there should be a technical spec. Or the functional spec should incorporate this information.

      The purpose of it is to think through the design/implementation and future obstacles ahead of time:

A software design is simply one of several ways to force yourself to think through the entire problem before attempting to solve it. Skilled programmers use different techniques to this end: some write a first version and throw it away, some write extensive manual pages or design documents, others fill out a code template where every requirement is identified and assigned to a specific function or comment. For example, in Berkeley DB, we created a complete set of Unix-style manual pages for the access methods and underlying components before writing any code. Regardless of the technique used, it's difficult to think clearly about program architecture after code debugging begins, not to mention that large architectural changes often waste previous debugging effort. Software architecture requires a different mind set from debugging code, and the architecture you have when you begin debugging is usually the architecture you'll deliver in that release.
      This is Design Lesson 2 from the history of Berkeley DB. Check it out - a nice article about the history of developing a pretty complicated system.