VirtualTam's bookmarks

    1. Understand user needs
    2. Do ongoing user research
    3. Have a multidisciplinary team
    4. Use agile methods
    5. Iterate and improve frequently
    6. Evaluate tools and systems
    7. Understand security and privacy issues
    8. Make all new source code open
    9. Use open standards and common platforms
    10. Test the end-to-end service
    11. Make a plan for being offline
    12. Make sure users succeed first time
    13. Make the user experience consistent with GOV.UK
    14. Encourage everyone to use the digital service
    15. Collect performance data
    16. Identify performance indicators
    17. Report performance data on the Performance Platform
    18. Test with the minister
  1. Pro: no need to set up a DNS server to test virtualhosts. Con: keep in mind that all "fake" hosts will point to 127.0.0.1!

    1. Use /etc/hosts to declare test hosts / domains / subdomains
    #<ip-address>	<hostname.domain.org>	<hostname>
    127.0.0.1	localhost.localdomain	localhost
    127.0.0.1	host.localdomain	host
    127.0.0.1	sub.host.localdomain	sub.host
    ::1		localhost.localdomain	localhost
    
    2. Allow per-user virtualhost definitions in either (depending on your distro)
    • /etc/httpd/conf/httpd.conf
    • /etc/apache2/apache2.conf

    Include /home/albert/.httpd/*.conf

    3. Profit! Create virtualhosts with local hostnames :)
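
    For example, a minimal per-user virtualhost dropped in the included directory could look like this (a sketch: the hostname matches the /etc/hosts entries above, the paths are made up):

    # /home/albert/.httpd/sub.host.conf
    <VirtualHost *:80>
        ServerName sub.host.localdomain
        DocumentRoot /home/albert/www/sub.host
    </VirtualHost>

    Note: on Apache 2.2, name-based virtualhosts also need a NameVirtualHost *:80 directive; Apache 2.4 dropped it.
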
  2. "Human collective behavior can vary from calm to panicked depending on social context. Using videos publicly available online, we study the highly energized collective motion of attendees at heavy metal concerts. We find these extreme social gatherings generate similarly extreme behaviors: a disordered gas-like state called a mosh pit and an ordered vortex-like state called a circle pit. Both phenomena are reproduced in flocking simulations demonstrating that human collective behavior is consistent with the predictions of simplified models."

  3. Python's built-in unittest module is quite cool, but a bit limited and way too verbose (read: it's not exactly easy to entice developers into writing unit tests)

    I'm currently looking for more dev-friendly solutions, the key points being:

    • writing test code should be easy and straightforward: keep the focus on "what to test" instead of "how to transcribe a process into a test"
    • parallelization! We, spoiled developers, should make good use of our way-too-many-cores build machines...
    • complete feature set!
      • we don't want to just run tests...
      • coverage reports (find dead/weak/untested code sections)
      • output formatting (JUnit-XML seems to be quite a common format out there)

    There seem to be 3 solutions in Python:

    • stock unittest + project-dependent customizations / test helpers
    • nosetests
    • py.test

    And 2 ways of getting things done:

    • keeping things stock: no external dependency, project-specific implementation...
    • using a test framework: one more module in your (test) virtualenv, more concise tests, more features (parallel runs, code coverage, etc.; see the sketch below)
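
    To give an idea of the verbosity gap, here is the same trivial check written both ways (a minimal sketch; the file and test names are made up):

    # test_maths.py, stock unittest version: a test class, a method, an explicit runner
    import unittest

    class MathsTestCase(unittest.TestCase):
        def test_addition(self):
            # assertEqual is one of the many assert* helpers on TestCase
            self.assertEqual(1 + 1, 2)

    if __name__ == '__main__':
        unittest.main()

    # test_maths.py, py.test version: a bare function and a plain assert
    def test_addition():
        assert 1 + 1 == 2

    As a bonus, py.test ticks the boxes above: --junitxml=report.xml produces JUnit-XML output out of the box, and plugins bring the rest (pytest-xdist for parallel runs, pytest-cov for coverage reports).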

    Some links:

  4. Uses a project's or repository's history to plot user contributions, displaying an elegant, colored graph of the file tree.
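
    Running it is a one-liner (a sketch: the flags come from Gource's manual, the repository path is of course yours):

    $ gource --seconds-per-day 0.5 --key /path/to/repo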

    After running it on quite different projects...

    • Python/Bash CI/Jenkins scripts
    • Qt apps: GoldenDict, Psi+
    • PHP website: Shaarli

    ...and watching some vids on teh intartubez:

    It makes it easy to spot some interesting implementation aspects (sorted by descending impact):

    • language-dependent trees (oh hai Java packages ^^)
    • framework-dependent trees
    • project-management method (none, Agile, TDD)

    Having a graphical tool also quickly shows:

    • the overall structure of the project (a bit cooler than a simple $ tree, way quicker than loading the project in an IDE)
    • the distribution of files (by extension)
    • who are the most active contributors
    • what are the most modified files over time
    • who does what: additions, deletions, refactoring

    Some more CI-related matters:

    • are there any tests?
    • what is the source code / test code ratio? (we could expect a project/lib with N modules to have at least N test modules)
    • who initiates / implements / optimizes test code?