VirtualTam's bookmarks
-
UK Government - Digital Service Standard
2016-08-16 -
- Understand user needs
- Do ongoing user research
- Have a multidisciplinary team
- Use agile methods
- Iterate and improve frequently
- Evaluate tools and systems
- Understand security and privacy issues
- Make all new source code open
- Use open standards and common platforms
- Test the end-to-end service
- Make a plan for being offline
- Make sure users succeed first time
- Make the user experience consistent with GOV.UK
- Encourage everyone to use the digital service
- Collect performance data
- Identify performance indicators
- Report performance data on the Performance Platform
- Test with the minister
-
Conan | C/C++ Open Source Package Manager
2016-08-15 -
# from a virtualenv
$ pip list --outdated
-
Pro: no need to set up a DNS server to test virtualhosts. Con: keep in mind that all "fake" hosts will point to 127.0.0.1!
- Use /etc/hosts to declare test hosts / domains / subdomains
# <ip-address>  <hostname.domain.org>    <hostname>
127.0.0.1       localhost.localdomain    localhost
127.0.0.1       host.localdomain         host
127.0.0.1       sub.host.localdomain     sub.host
::1             localhost.localdomain    localhost
- Allow per-user virtualhost definitions in either (depending on your distro):
/etc/httpd/conf/httpd.conf
/etc/apache2/apache2.conf
by adding:
Include /home/albert/.httpd/*.conf
- Profit! Create virtualhosts with local hostnames :)
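For illustration, a per-user vhost file such as /home/albert/.httpd/sub.host.conf (the filename and DocumentRoot are hypothetical, matching the sub.host entry added to /etc/hosts above) could look like:
<VirtualHost *:80>
    ServerName sub.host.localdomain
    ServerAlias sub.host
    # hypothetical document root; any directory readable by the Apache user works
    DocumentRoot /home/albert/www/sub.host
</VirtualHost>
After reloading Apache, http://sub.host/ resolves locally thanks to the /etc/hosts entry and is served by this vhost.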
-
"Human collective behavior can vary from calm to panicked depending on social context. Using videos publicly available online, we study the highly energized collective motion of attendees at heavy metal concerts. We find these extreme social gatherings generate similarly extreme behaviors: a disordered gas-like state called a mosh pit and an ordered vortex-like state called a circle pit. Both phenomena are reproduced in flocking simulations demonstrating that human collective behavior is consistent with the predictions of simplified models."
-
Python unit testing frameworks: Nose, Pytest
2015-02-13 - Python's built-in unittest module is quite cool, but a bit limited and way too verbose (read: it's not easy to encourage developers to write unit tests)
I'm currently looking for more dev-friendly solutions, the key points being:
- writing test code should be easy and straightforward: keep the focus on "what to test" instead of "how to transcribe a process into a test"
- parallelization! we, spoiled developers, should make good use of our way-too-many-cores build machines...
- complete feature set! we don't want to just run tests...
  - coverage reports (find dead/weak/untested code sections)
  - output formatting (JUnit-XML seems to be quite a common format out there)
There seem to be 3 solutions in Python:
- stock unittest + project-dependent customizations / test helpers
- nosetests
- py.test
And 2 ways of getting things done:
- keeping things stock: no external dependency, project-specific implementation...
- using a test framework: one more module in your (test) virtualenv, more concise tests, more features (parallel runs, code coverage, etc.)
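As a rough sketch of the conciseness gap (the add() function and file name below are made up, not from the bookmark), here is the same check written for stock unittest and for py.test:
# test_add.py - hypothetical example
import unittest

def add(a, b):
    return a + b

# stock unittest: a TestCase class, a test method, an explicit assert* call
class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

# py.test: a plain function and a bare assert are enough
def test_add_pytest():
    assert add(2, 3) == 5
py.test also collects unittest-style TestCase classes, so both styles can coexist in the same suite while migrating.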
Some links:
-
Uses a project or repository's history to plot user contributions, displaying an elegant, colored graph of the file arborescence.
After running it on quite different projects...
- Python/Bash CI/Jenkins scripts
- Qt apps: GoldenDict, Psi+
- PHP website: Shaarli
...watching some vids on teh intartubez:
- Minecraft: https://www.youtube.com/watch?v=zRjTyRly5WA
- Linux kernel: https://www.youtube.com/watch?v=AhDiYPLo3p4
- Python: https://www.youtube.com/watch?v=cNBtDstOTmA
It allows one to spot some interesting implementation aspects (sorted by descending impact):
- language-dependent trees (oh hai Java packages ^^)
- framework-dependent trees
- project-management method (none, Agile, TDD)
Having a graphical tool also quickly shows:
- the overall structure of the project (a bit cooler than a simple $ tree, way quicker than loading the project in an IDE)
- the distribution of files (by extension)
- who are the most active contributors
- what are the most modified files over time
- who does what: additions, deletions, refactoring
Some more CI-related matters:
- are there any tests?
- what is the source code / test code ratio? (we could expect a project/lib with N modules to have at least N test modules)
- who initiates / implements / optimizes test code?