The roads I take...

KaiRo's weBlog


October 3rd, 2016

The Neverending Question of Login Systems

I put a lot of work into my content management system in the last week(s) - first to work on some ongoing backend rework/improvements (after some design improvements on this blog site and my main site), but then to tackle an issue that has been lingering for a while: the handling of logins for users.

When I first created the system (about 13 years ago), I put simple user and password input fields into place and, just like many people designing small sites probably did and maybe still do, I didn't know better and made a few mistakes there, like storing passwords without enough security precautions or sending them to people in plaintext via email (I know, that causes a WTF moment even in me nowadays).

And I was very happy when the seemingly right solution for this came along: Have really trustworthy people who know how to deal with it store the passwords and delegate the login process to them - ideally in a decentralized way. In other words, I cheered for Mozilla Persona (or the BrowserID protocol) and integrated my code with that system (about 5 years ago), switching most of my small sites in this content management system over to it fully.
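For those who never integrated it: the relying-party side of Persona was pleasantly small. The following is a rough Python sketch of what the server-side verification amounts to (my actual implementation is PHP, and the audience URL is just a placeholder) - the browser hands the site an assertion, the site POSTs it to Mozilla's hosted verifier, and gets back a verified email address:

# Rough sketch of the relying-party side of BrowserID/Persona login.
# (Python for illustration only - the real CMS code is PHP, and the
# audience URL below is a placeholder.)
import json
import urllib.parse
import urllib.request

VERIFIER_URL = "https://verifier.login.persona.org/verify"
AUDIENCE = "https://www.example.com"  # must be the site the user logs in to

def verify_assertion(assertion):
    # POST the browser-supplied assertion to the hosted verifier.
    data = urllib.parse.urlencode({"assertion": assertion,
                                   "audience": AUDIENCE}).encode("ascii")
    with urllib.request.urlopen(VERIFIER_URL, data) as response:
        result = json.loads(response.read().decode("utf-8"))
    if result.get("status") == "okay":
        return result["email"]  # verified login identifier, no password involved
    return None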

Yay, no need to make my system store and handle passwords in a really safe and secure way, as it didn't need to store any passwords at all any more! Everything is awesome, let's tackle other issues. Or so I thought. But, in case you haven't heard, Persona is being shut down on November 30, 2016. Bummer.

So what were the alternatives for my small websites?

Well, I could go back to handling passwords myself, with a lot of research into actually secure practices, a lot of coding to get things right, probably quite a bit of bugfixing afterwards, and ongoing maintenance to keep up with ever-growing security challenges. Not really something I wanted to go with, also because it would make my server's database a more attractive target to break into (even though there aren't many different people with actual logins).

Another alternative is using delegated login via Facebook, Google, GitHub or others (the big question is who), using the OAuth2 protocol. Now there are two issues there: First, OAuth2 isn't really made for delegated login but for authorization to use some resource (via an API), so it doesn't return a login identifier (e.g. an email address) but rather an access token for resources, and it needs another potentially failure-prone roundtrip to actually get such an identifier - so it's more complicated than e.g. Persona (because using it just for login is basically misusing it). Second, the OAuth2 providers I know of are entities I don't want to tell about every login on my content management system, both because their Terms of Service allow them to sell that information to anyone, and because I don't trust them enough to know about each and every one of those logins.
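To make the first point concrete, here is roughly what the login part of an OAuth2 flow looks like with GitHub as the provider - a simplified Python sketch (the endpoint paths follow GitHub's OAuth documentation, everything else is illustrative; my actual prototype lives in the PHP CMS code). After the user is redirected back with a one-time code, two more requests are needed before you hold an identifier like an email address:

# Sketch of using an OAuth2 provider (GitHub here) just for login.
# Python for illustration; endpoint paths per GitHub's OAuth docs,
# client credentials and the rest are placeholders.
import json
import urllib.parse
import urllib.request

CLIENT_ID = "..."      # registered with the provider
CLIENT_SECRET = "..."

def email_from_oauth_code(code):
    # Roundtrip 1: exchange the one-time code for an access token.
    token_req = urllib.request.Request(
        "https://github.com/login/oauth/access_token",
        data=urllib.parse.urlencode({"client_id": CLIENT_ID,
                                     "client_secret": CLIENT_SECRET,
                                     "code": code}).encode("ascii"),
        headers={"Accept": "application/json"})
    with urllib.request.urlopen(token_req) as resp:
        token = json.loads(resp.read().decode("utf-8"))["access_token"]
    # Roundtrip 2: use the token to ask the API who actually logged in.
    user_req = urllib.request.Request(
        "https://api.github.com/user",
        headers={"Authorization": "token " + token})
    with urllib.request.urlopen(user_req) as resp:
        user = json.loads(resp.read().decode("utf-8"))
    return user.get("email")  # may be empty if the user keeps it private

Compare that to Persona, where a single request to the verifier directly returned a verified email address.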

Firefox Accounts would be an interesting option: Mozilla is trustworthy when it comes to dealing with password data and wouldn't sell login data or things like that, and it may support the same BrowserID assertion/verification flow as Persona (which I have implemented already) - but it doesn't (yet) allow non-Mozilla sites to use it (and given that it's a CMS, I'd have multiple non-Mozilla sites I'd need to use it for). It also seems to support an OAuth2 flow, so it may be an option via that route as well if it were open for use at this point - and I need something before Persona goes away, obviously.

Other options, like "passwordless" logins that usually require a roundtrip to your email account or mobile phone on every login, sounded too inconvenient for me to use.

That said, I didn't find anything "better" than OAuth2 as a Persona replacement, so I took an online course on it, then put a lot of time into implementing it, and I now have a prototype working with GitHub's implementation (while I don't trust them with all those logins, I felt they are OK enough to test against). That took quite some work as well, but some of the abstraction I did for the Persona implementation can be almost or completely reused (in the latter case, I had just abstracted things to a level that works for both) - and there's potential in, for example, getting more info than just an email address from the OAuth2 provider and prefilling some profile fields on user creation. That said, I'm still wondering about an OAuth2 provider that's trustworthy enough privacy-wise - ideally it would just be a login service, so I don't have to require people to register for a whole different web service just to use my content management system. Even with the fallback alone and without the federation to IdPs, Mozilla Persona was nicely in that category, and Firefox Accounts would be as well if it were open for public use. (Even better would be if the browser itself acted as an identity/login agent and I could just get a verified email address from it, as some ideas behind BrowserID and Firefox Accounts envisioned.)

I was also wondering about potentially hosting my own OAuth2 provider, but then I'd need to devise secure password handling on my server yet again and I originally wanted to avoid that. And I'd need to write all that code - unless I find out how to easily run existing code for an OAuth2 or BrowserID provider on my server.

So, I'm not really happy yet but I have something that can go into production fast if I don't find a better variant before Persona shuts down for good. Do you, dear reader, face similar issues and/or know of good solutions that can help?

By KaiRo, at 21:21 | Tags: BrowserID, CBSM, identity, login, Mozilla, OAuth2, Persona | 2 comments | TrackBack: 0

September 8th, 2016

IDIC: Embrace Differences

The philosophy of IDIC, or Infinite Diversity in Infinite Combinations, has kept my mind busy quite a bit recently.

Well, if we want to go by the book, IDIC is actually seen as the basis of a philosophy, specifically that of Star Trek's Vulcan species; its "native language" name is Kol-Ut-Shan, and it's symbolized by that really nice-looking jewel that has a triangle/pyramid with a marked point/ball on top and a circle around it. That said, it ends up encapsulating Gene Roddenberry's philosophy behind a lot of what Star Trek depicts, a philosophy that even 50 years (to this exact day) after the show first aired is still largely shared by the fans of the franchise (including myself).

What IDIC centers around is increasing and heavily embracing diversity in all things - and that can be applied to, and inspire thought about, many things.
Everything of course starts with Gene's vision of a lead crew as diverse as the mid-1960s would allow, and extends to a United Federation of Planets that is a utopian in-between of the UN and the US on a galactic scale, to figures other than white men being in leadership positions in various incarnations of the franchise, and to preserving diversity of life forms beyond the two-legged variety in various stories as well (if you like deeply digging into the messages and philosophy of Star Trek episodes, the Mission Log Podcast may be something for you).
I like looking beyond Star Trek when it comes to this philosophy, though. Take for example the genomes of life forms we know (in reality, on this planet) - no two life forms have the exact same genes, not even twins. Nature shows that "infinite" diversity (created from seemingly infinite combinations of very few elements) not because it's fun, or because our design sucks, or because it's Vulcan, of course. It gives life an ingenious robustness by making it hard for attacks to affect large numbers of different individuals and species, it makes life forms complement each other to cover different environments, and it makes them adaptable to react to different circumstances.
And from all I hear from studies and see in practice, when we put together diverse groups of people, they usually excel in creativity and in coming up with different ideas, they are harder to control by a single bad influence, and they develop more respect for other humans, higher sensitivity towards the needs of other people, and a deeper understanding of and respect for different persons - at least in comparison to groups of people very similar to each other. Fun fact on the side: the crowd I see at Star Trek conventions is probably one of the most diverse groups of "geeks" you can find (across gender, race, age, profession, and other criteria) - thanks to the role models and the philosophy put front and center in that franchise. That kind of diversity is something I want to see in many more areas of my life and around me. The more we get different people to sit down or stand together, the more we create and show role models of diversity enriching life, the more we get people to respect other people, no matter who they are - and the more we create a better world, and universe.

Now, what about things other than life forms? What for example about computer systems? About software?
There are a lot of people advocating for hardware, operating systems, and software packages that are exactly the same for everyone, so that it's easy to verify they haven't been modified unduly and so that software updates are easier to apply. That surely has merit in a number of dimensions, and reproducible builds, Flatpak and Snap, even the reduction of "fingerprintability" on the Web, and quite a few other mechanisms exist to reduce differences between our systems.
But then, we as users of those computers and that software are all different. We want our systems to be personalized and therefore to be different from anyone else's system. We install different add-ons into our Firefox, different apps or applications on our computers and smartphones, log into different accounts on different websites - we want our system to be uniquely ours, or at least to feel like it is. So at some level, we as users want "infinite" diversity of computers. Different people may even want different numbers and sizes of screens, have a different focus on what is important for their computer to do, and desire different set-ups of the hardware on their home and/or work desks. And there are security reasons to put randomization (like ASLR and other defenses against ROP attacks) into our computer (runtime) setups in some cases. Would a higher degree of diversity in software make it harder for attacks to break a large number of systems? Maybe - I don't know which benefits outweigh the others there.

It's clear that this is a principle which works pretty decently in nature at a low level and for groups of people at a high level, and we definitely should embrace it there. At which layers of our software and hardware it's useful or detrimental is not always entirely clear, but it has to play a role in personalizing our computer systems to our requirements, desires and wishes: we are all different, and that diversity needs to end up being reflected so we can use its strength to work together and improve this world.

Thanks to Gene Roddenberry and Star Trek in general for giving me something interesting to think about - and Happy 50th Birthday Star Trek!

By KaiRo, at 23:52 | Tags: IDIC, Mozilla, Star Trek | no comments | TrackBack: 0

May 16th, 2016

Tools I Wrote for Crash (Stats) Analysis

Now that I'm off the job that dominated my life (and almost burned me out) for the last years, I finally have some time again to blog. And I'll start with stuff I actually did for that job, as I'm still happy to help others continue from where I left off.

The more fun part of the stability management job was actually creating new analyses - and tools. And those tools are still helpful to people working on crash analysis or crash stats analysis now - so as my last task on the job, I wrote some documentation for the tools I had created.

One of the first things I created (and which was part of the original job description when I started) was a prototype for detecting crash "explosiveness", i.e. a detector for crashes that are rising significantly in volume. This turned out to be quite helpful for me and others to use, and the newest reports of it are listed in my Report Overview. I probably should talk about it in more detail at some point, but I did write up a plan on the wiki for the tool, and the (PHP) code is on hg.m.o (that was the language I knew best and gave me the fastest result for a prototype). I had plans to port/rewrite it in python, but didn't get to it. Calixte, who is looking after most of "my" tools now, is working on that though, and I have already promised to review his work as a volunteer so we can make sure we have this helpful capability in better code (and hopefully better UI in the end) for future use.
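To give an idea of what "explosiveness" means without reading the whole wiki plan: in essence, a signature gets flagged when its crash volume today is far above the baseline of the preceding days. The following is a deliberately simplified Python sketch of that idea - the thresholds and the baseline model are made up for illustration and do not match the actual algorithm in the PHP prototype:

# Simplified sketch of the "explosiveness" idea: flag signatures whose
# crash volume today is far above the baseline of the preceding days.
# (Thresholds and the baseline model are illustrative assumptions only.)
from statistics import mean, stdev

def is_explosive(daily_counts, min_count=20, sigmas=3.0):
    # daily_counts: crash counts per day, oldest first, today last
    *baseline, today = daily_counts
    if today < min_count or len(baseline) < 3:
        return False  # too little data to call anything an explosion
    base_mean = mean(baseline)
    base_dev = stdev(baseline) or 1.0  # avoid flagging on a perfectly flat baseline
    return today > base_mean + sigmas * base_dev

# Example: a signature hovering around 5-8 crashes/day suddenly hitting 60.
print(is_explosive([6, 5, 8, 7, 6, 60]))  # True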

In general, I have created one-line docs for all the PHP scripts I had in the Mercurial repository, and put them into the run-reports script that is called by a daily cron job. Outside of the explosiveness script, most of those have been obsoleted by Socorro Super Search (yay for Adrian's work and for the ElasticSearch backend!) nowadays.

Also, the scripts that generate the summed-up data for the Are We Stable Yet dashboard and graphs (also see an older blog post discussing the graphs) have been ported to python (thanks to Peter for helping me get started there) - and those are available in the Magdalena repository on GitHub. You'll see that this repository doesn't just have more modern code, using python instead of PHP and the public Socorro API instead of private PostgreSQL access - it also has a decent README documenting what it and every script in it does. :)
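Going through the public API means the data fetching boils down to plain HTTP queries. Here is a rough Python sketch of such a Super Search request - endpoint and parameter names written down from memory and all values purely illustrative, so check the API documentation on crash-stats for the authoritative details:

# Sketch of fetching crash data via the public Socorro Super Search API
# instead of direct database access. (Parameter names from memory, values
# illustrative - see the API documentation on crash-stats.)
import json
import urllib.parse
import urllib.request

API = "https://crash-stats.mozilla.com/api/SuperSearch/"
params = {
    "product": "Firefox",
    "date": ">=2016-05-01",
    "_facets": "signature",      # ask for a per-signature breakdown
    "_results_number": 0,        # we only care about the facets here
}
with urllib.request.urlopen(API + "?" + urllib.parse.urlencode(params)) as resp:
    data = json.loads(resp.read().decode("utf-8"))
for entry in data["facets"]["signature"][:10]:
    print(entry["count"], entry["term"])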

The most important tools for people analyzing crash stats are in the Datil repository on GitHub (and its deployment on crash-analysis), though. I used all those 4 dashboards/tools daily in the last months to determine what to report to Release Managers and other parties, and to find out what we needed to file as bugs and/or push to get fixed. Datil, like Magdalena, has good docs right in the repository now, readable directly on GitHub.

So, what's there?
Well, the aforementioned "Are We Stable Yet" dashboard and graphs, for sure (see the longtermgraph docs for what graphs you can get and a legend of what the lines mean).
There's also a tool/prototype for "what's important" weighted top crash lists that I called "Top Crash Score"; see the score docs for what it does and examples of how to use that tool.
And finally, I created a search query comparison tool that let me answer questions like "which crashes happen more with or without multi-process support (e10s) being active?" or "which crashes have vanished with the new beta and which have appeared (instead)?" - which was incredibly helpful, to me at least. Read the searchcompare docs for more details and examples.

I probably won't spend a lot of time with those tools any more, neither in usage nor in development, but I'm still happy about people using them and giving me feedback, and I'm also happy to review and merge pull requests that make sense to me!

By KaiRo, at 22:33 | Tags: analysis, CrashKill, explosiveness, Mozilla, stability | no comments | TrackBack: 0

May 4th, 2016

Projects Done, Looking For New Ones

I haven't been blogging much recently, but it's time to change that - like multiple things in my life that are changing right now.

I'll start with the most important piece first: My contract with Mozilla is ending in a week.

I had been accumulating frustration with the pieces of my role that were founded in somewhat tedious routine, like the whack-a-mole on crash spikes, which was not very rewarding and never really gave me time to breathe - and then I overworked myself trying to get the needed success experiences in things like building dashboards and digging into data (which I really liked).
Being very passionate about Mozilla's Mission and Manifesto and identifying with the goals of my role, I could paper over this frustration and fatigue for years, but it kept building up in the background until it started impairing my strongest skill: communication with other people.

So, we had to call an end to this particular project - a role like this is never "finished", but it's also far from "failed" as I accomplished quite a bit over those 5 years, in various variants of the role.

After some cooldown and getting this out of my system, I'm happy to take on a new role of project management, possibly combined with some data analysis, somewhere, hopefully in an innovative area that aligns with my interests and possibly my passion for people being in control of their own lives.

As for Mozilla, no matter if an opportunity for work comes up there, I will surely stay around in the community, as I was before - after all, I still believe in the project and our mission and expect to continue to do so.

In other project management news, I just successfully finished the project of taking over my new condo and moving in within a week. It took quite some coordination and planning beforehand, being prepared for last-minute changes, communicating well with all the different people involved and making informed but swift decisions at times - and it worked out perfectly. Sure, to put it into IT terms, there are still a few "bugs" left (some already fixed) and there's still a lot of followup work to do (need more furniture etc.), but the project "shipped" on time.

I'm looking forward to doing the same for future work projects, wherever they will manifest.

By KaiRo, at 16:51 | Tags: burnout, CrashKill, Mozilla, project management, stability, stress | no comments | TrackBack: 0

October 13th, 2015

Shortening Crash Signatures: Dropping Argument Lists

Crash signatures derived from the functions on the stack are how we group ("bucket") crashes as belonging to a certain issue or bug. They should be precise enough to identify this "bucket" but also short enough that we can handle them in lists and when talking about those issues. For some time, we have seen that our signatures are very long-winded and contain parts that sometimes make it even harder to bucket correctly. To fix that, we set out to shorten our crash signatures.

We already completed a first step of this effort in June: After we found that templates in signatures were often fluctuating wildly in crashes that belonged to the same bug, all <sometemplate> parts of crash signatures were replaced by just <T>.

That made a signature like this (from bug 1045509, the [@ …] are our customary delimiters for signatures, not really part of the signature itself though):
[@ nsTArray_base<nsTArrayFallibleAllocator, nsTArray_CopyWithMemutils>::UsesAutoArrayBuffer() | nsTArray_Impl<unsigned char, nsTArrayFallibleAllocator>::SizeOfExcludingThis(unsigned int (*)(void const*)) ]

be shortened to:
[@ nsTArray_base<T>::UsesAutoArrayBuffer() | nsTArray_Impl<T>::SizeOfExcludingThis(unsigned int (*)(void const*)) ]

Which is definitely somewhat better to read and put in tables like topcrash reports, etc. - and we found it did not munge bugs together into the same signature more than previously, at least to our knowledge.

But we found out we can go even further: Different argument lists of functions (mostly due to overloading) have, as far as I remember, not helped us distinguish any bugs in the >4 years I have been working with crashes - but patches changing the types of arguments or adding one to a function often made us lose the connection between a bug and the signature. Therefore, we are removing argument lists from the signatures.

The signature listed above will turn out as:
[@ nsTArray_base<T>::UsesAutoArrayBuffer | nsTArray_Impl<T>::SizeOfExcludingThis ]
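For illustration, both transformations can be approximated with a small helper that collapses text between balanced delimiters - a simplified Python sketch, not the actual Socorro signature generation code (which handles quite a few more special cases):

# Simplified sketch of the two signature transformations: collapse template
# parameters to <T> (done in June) and drop argument lists (this change).
# An approximation only - not the actual Socorro signature generation code.
def _collapse(signature, open_ch, close_ch, replacement):
    # Replace everything between balanced open_ch/close_ch pairs.
    out, depth = [], 0
    for ch in signature:
        if ch == open_ch:
            if depth == 0:
                out.append(replacement)
            depth += 1
        elif ch == close_ch and depth > 0:
            depth -= 1
        elif depth == 0:
            out.append(ch)
    return "".join(out)

def shorten(signature):
    signature = _collapse(signature, "<", ">", "<T>")  # templates -> <T>
    signature = _collapse(signature, "(", ")", "")     # drop argument lists
    return signature

sig = ("nsTArray_base<nsTArrayFallibleAllocator, nsTArray_CopyWithMemutils>"
       "::UsesAutoArrayBuffer() | nsTArray_Impl<unsigned char, "
       "nsTArrayFallibleAllocator>::SizeOfExcludingThis(unsigned int (*)(void const*))")
print(shorten(sig))
# -> nsTArray_base<T>::UsesAutoArrayBuffer | nsTArray_Impl<T>::SizeOfExcludingThis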

Today, we have run a script on Bugzilla (see bug 1178094) to update all affected bugs to add the new shortened signature to the Crash Signatures field without sending a ton of bugmail.

We have tested in the last weeks that Socorro crash-stats can create the new shortened signatures fine on their staging setup, and that generation of the special "shutdownhang | …" signatures (for browser processes that took more than 60s to shut down) and "OOM | …" signatures (for out-of-memory crashes) still works in all cases where it worked before.

As all preparation has been done, we will flip the switch on production Socorro crash-stats in the next days, and then those shortened signatures will be created everywhere.

Note that this will impede some stats that compare signatures across days, even though we will try to reprocess some crashes to make the watershed fall on a UTC day boundary so that as few stats as possible are disturbed by the change.

Please let me know of any issues with those changes (as well as any other questions about or issues with crash analysis), and thanks to Lars, Byron (glob) and others who helped with those changes!

By KaiRo, at 20:14 | Tags: Bugzilla, CrashKill, Mozilla, Socorro | 2 comments | TrackBack: 0

August 24th, 2015

Ending Development and Support for My Add-ons

This has been a long time coming, actually, and recent developments just put the final nail in the coffin.

I am ending all development and support for my "extension"-type add-ons effective immediately.

This affects (daily user numbers according to
If anyone is interested in taking over development and maintenance of any of those, please let me know and I'm happy to convert their repositories over to GitHub for easier working with them, add the new developer to their administration on AMO, and/or move them over to you completely.

I will leave them listed on AMO for a little while so people who want to take over can take a look, but I will hide them from the site in the near future if nobody is interested.

The reasons for this step are multiple:

For one thing, I just don't have the time for updating their code or improving them. My job is stressful enough that my head is overflowing with Mozilla-related things all the time, and my employer is apparently not willing to give me any relief (in terms of hiring someone to supplement me) that would give me back sanity, so I need to remove some Mozilla- and software-related things from my non-work time to gain back a little sanity so that I don't burn out.

I am also really sad that apparently nobody finds the time or energy to make decent management and notification mechanisms available to UI code around the new-style web storage mechanisms like indexedDB, appCache, or ServiceWorkers caching, while we do have quite nice APIs there for long-standing things like cookies. For getting Tahoe Data Manager (which was my most interesting add-on) to work decently, I would have needed decent APIs there as well.

Then, my interest for experimenting with code has moved more and more away from the browser, which keeps changing around me all the time, and towards actual web development, where existing code doesn't get broken all the time and your code is more isolated. As a bonus, I can develop things that run on my (Firefox OS) phone and that I can show other people when I'm somewhere. And even there, I don't get as much time to dig into stuff as I would like to, see above.

And finally, and that's why this culminates right now, I disagree with some pieces of Mozilla's add-on strategies right now, and I don't want to be part of that as an add-on developer.
For one, I think add-on signing is a good idea in principle, but not enabling developers to test their code in any way in the same builds that users get is against everything I learned in terms of quality assurance. Then, requiring developers and other users of unbranded (or early pre-release) builds to turn off security for everything just to use/test one or two unsigned add-ons just feels plainly wrong to me (and don't tell me it can't be done otherwise, as I know there are perfectly good ways to solve this without undermining signing, while preserving more safety). And I also fear that, while add-on signing brings a lot of pain to add-on developers and will make us lose some of them and their users, we will not reduce the malware/adware problem in the mid to long term, but rather make it worse, as those actors will resort to injecting binary DLLs into the Firefox process - which is the primary cause of startup crashes on updates - so I will have more grief in my actual job due to this, next to Firefox losing users that see those crashes.
And on the deprecation of "the permissive add-on model", as they call it in the post, I think that the Firefox UI being written in web (CSS/JS/HTML) or web-like (XUL) technologies, and the ability to write add-ons that can use those to do anything in Firefox, including prototyping and inventing new functionality and UI paradigms, is the main thing that sets Firefox apart product-wise from all its competitors. If we take that away, there is no product reason for using Firefox over any other browser; the only reasons will be the philosophy behind Mozilla (which is what I'm signed up for anyhow) and the specific reflection of that philosophy in some internals of the browser, like respecting privacy and choice a little bit more than others - but most people consider those details, and it's hard to win them over with those. Don't get me wrong, I think that the WebExtensions API is a great idea (and it would be awesome to standardize some bits of it across browsers), and add-ons being sandboxed by default is long overdue. But we also would need to require less signing and review for add-ons that are confined to the safe APIs provided there, and I think we'd still need - with heavy review, signing, and whatnot - to allow people to go fully into the guts of Firefox, with full permissions, to provide the basis for the really ground-breaking add-ons that set us apart from the rest of the world. Even though almost all of the code of my add-ons ran within their own browser tab, they required a good reach into high-permission areas, which the new WebExtensions API will probably not allow in that way. But I do not even have the time to investigate how I could adapt my add-ons to any of this, so I decided it's better to pull the plug right now.

So, all in all, I probably have waited too long with this anyhow, mostly because I really like Tahoe Data Manager, but I just can't go on pretending that I will still develop or even maintain those add-ons.

Again, if anyone is interested in taking over, either fully or with a few patches here and there, please contact me and I'll help to make it happen.

(Note that this does not affect my language packs, dictionaries, or themes at this point, I'm continuing to maintain and develop them, at least for now.)

By KaiRo, at 17:14 | Tags: Add-Ons, Data Manager, download manager, Firefox, Mandelbrot, Mozilla | 4 comments | TrackBack: 0

April 27th, 2015

"Nothing to Hide"?

I've been bothered for quite a while with people telling me they "have nothing to hide anyhow" when the topic of Internet privacy comes up.

I guess that mostly comes from the impression that the whole story is our government watching (over) us and the worst thing that can happen is incrimination. While that might threaten some things, most people do nothing that is really interesting enough for a government to go into attack mode over it (or so they believe, and very firmly so). And I even agree that most governments (including the US and EU countries) actually actively seek out what they call "terrorist activities" (even though they often stretch that term in crazy ways) and/or child abuse and similar topics that the vast majority of citizens agree are a bad thing and are not involved in - and the vast majority of politicians and government workers believe they act in the best interest of their citizens when "obviously fighting that" via their different programs of privacy-undermining surveillance. That said, most people seem to be OK with their government collecting data about them as long as it's not used to incriminate them (and when that happens, it's too late to protest the practice anyhow).

A lot has been said about that since the "Snowden leaks", but I think the more obvious short-term and direct threat is in corporate surveillance, which has been swept under the rug in most discussions recently - to the joy of Facebook, Google and other major players in that area. I have also seen that when depicting some obvious scenarios resulting from that, people start to think about it much more promptly and realize the effect on their daily lives (even if those are minor issues compared to a government starting a manhunt against you with terror allegations or similar).

So what I start asking is:
  • Are you OK with banks determining your credit conditions based on all your comments on Facebook and your Google searches? ("Your friends say you owe them money, and that you live beyond your means, this is gonna be difficult...")
  • Are you OK with insurances changing your rates based on all that data? ("Oh, so you 'like' all those videos about dangerous sports and that deafening music, and you have some quite aggressive or even violent friends - so you see why we need to go a bit higher there, right?")
  • Are you OK with prices for flights or products in online stores (Amazon etc.) being different depending on what other things you have done on the web? ("So, you already planned that vacation at that location - good, so we can give you a higher air fare as you can't back out now anyhow.")
  • And, of course, envision ads in public or half-public locations being customized for whoever is in the area. ("You recently searched for engagement rings, so we'll show ads for them wherever you go." or "Hey, this is the third time today we sat down and a screen nearby shows Viagra ads." or "My dear daughter, why do we see ads for diapers everywhere we go?")
There are probably more examples, those are the ones that came to my mind so far. Even if those are smaller things, people can relate to them as they affect things in their own life and not scenarios that feel very theoretical to them.

And, of course, they are true to a degree even now. Banks are already buying data from Facebook, probably including "private" messages, for determining credit scores, insurances base rates on anything they can find out about you, flight rates as well as prices for some Amazon and other web shop products vary based on what you searched before - and ads both on your screen and even on postal mail get tailored to a profile built on all kinds of your online behavior. My questions above just take all of those another step forward - but a pretty realistic one in my opinion.

I hope thinking about questions like that makes people realize they might actually want to evade some of that and in the end they actually have something to hide.

And then, of course, I hope they realize that a non-profit like Mozilla, which doesn't seek to maximize money, can believably be on their side and help them regain some privacy where they - now - want to.

By KaiRo, at 00:38 | Tags: Internet, Mozilla, privacy | 8 comments | TrackBack: 0

August 22nd, 2014

Mirror, Mirror: Trek Convention and FLOSS Conferences

It's been a while since I did any blogging, but that doesn't mean I haven't been doing anything - on the contrary, I have basically been too busy to blog. We had a few Firefox releases where I scrambled until the last day of the beta phase to make sure we keep our crash rates as low as our users probably expect by now, I did some prototyping work on QA dashboards (with already-helpful results and more to come) and helped in other process improvements on the Firefox Quality team, worked with different teams to improve stability of our blocklist ping "ADI" data, and finally even attended a QA work week and took a vacation in the US. So plenty of stuff done, and I hope to get to blog about at least some pieces of that in the next weeks and months.

That said, one major part of my recent vacation was the Star Trek Las Vegas Convention, which I attended for the second time after last year. Ever since then, I have wanted to blog about some interesting parallels I found between that event (I can't compare to other conventions, as I've never been to any of those) and some Free, Libre and Open Source Software (FLOSS) conferences I've been to, most notably FOSDEM, but also the larger Mozilla events.
Of course, there are the big events in the big rooms and the official schedule - at the conferences it's the keynotes and presentations of developers about what's new in their software, what they learned or where we should go, at the convention it's actors and other guests talking about their experiences, what's new in their lives, and entertaining the crowd - both with questions from the audience. Of course, the topics are wildly different. And there are booths at both, also quite a bit different, as it's autograph and sales booths on one side, and mainly info booths on the other, though there are geeky T-shirts sold at both types of events. ;-)

The largest parallels I found, though, are about the mass of people that are there:
For one thing, the "hallway track" of talking to and meeting other attendees is definitely a main attraction and a big piece of the life of the events on both "sides" there. Old friendships are being revived, new ones formed, and the somewhat geeky commonalities are being celebrated and lead to tons of fun and involved conversations - not just the old fun bickering between vi and emacs or Kirk and Picard fans (or different desktop environments / different series and movies). :)
For the other, I learned that both types of events are in the end more about the "regular" attendees than the speakers, even if the latter end up being featured at both. Especially the recurring attendees go there because they want to meet and interact with all the other people going there, with the official schedule being the icing on the cake, really. Not that it would be unimportant or unneeded, but it's not as much the main attraction as people on the outside, and possibly even the organizers, might think. Also, going there means that for a few days you do not have to hide your "geekiness" from your surroundings and can actively show and celebrate it. There's also some amount of a "do good" atmosphere in both those communities.
And both events, esp. the Trek and Mozilla ones, tend to have a very inclusive atmosphere of embracing everyone else, no matter their physical appearance, gender or other social attributes. And actually, given how deeply that inclusive spirit has been anchored into the Star Trek productions by Gene Roddenberry himself, this might even run deeper in the fans there than it does in the FLOSS world. Notably, I saw a much larger share of women and people of color at the Star Trek convention than I see at FLOSS conferences - my guess is that at least a third of the Trek fans in Las Vegas were female, for example. I guess we need some more role models in the style of Nichelle Nichols and others in the FLOSS scene.

All in all, there are a lot of similarities and still quite some differences, but it's quite a twist on an alternate universe like the one depicted in Mirror, Mirror and other episodes - here it's a different crowd with a similar spirit, not the same people with different mindsets and behaviors.
As a very social person, I love attending and immersing myself in both types of events, and I somewhat wonder if and how we should have some more cross-pollination between those communities.
I for sure will be seen on more FLOSS and Mozilla events as well as more Star Trek conventions! :)

By KaiRo, at 17:09 | Tags: community, FOSDEM, Las Vegas, Mozilla, Star Trek | no comments | TrackBack: 0

May 12th, 2014

Hacking Day And Linuxwochen

In the last weeks, I spent some time on developer events, which I always embrace a lot because the "hallway tracks" are a constant great flow of interesting information and discussions - and of course the official talks are often quite interesting as well! :)

Image No. 23215

First up was the Mozilla Hacking Day in Berlin on Saturday, April 26. I already met Arpad Borsos, a fellow Mozillian from Vienna, at Vienna airport on Friday, as we incidentally were hopping over from the Austrian to the German capital on the same plane. Later in Berlin, we met more Mozillians at the hotel and went out for dinner together.
At the actual Hacking Day, we had ~70 people streaming in for some introductory talks, and then people split up into hacking sessions and some more in-depth talks (like the one in the picture, where fellow "Desktop QA" team member Henrik talked about automated testing), which were followed by even more hands-on hacking. I spent most of the time talking to various people about different topics, showing off Firefox OS a bit (after I had reflashed my Geeksphone Peak and gotten it into a working state again during the talks) and working on my actual "hacking" achievement: together with Georg Fritzsche, I found out that an issue with my Lantea Maps app freezing was a WebGL-related bug in Gecko, found a stack, reported it as a new bug and set a needinfo flag for a gfx developer. (Yesterday, I found a hacky workaround so Lantea Maps will soon work again despite that bug.)
On Sunday, I also was able to squeeze in some sightseeing and photo-taking, marching through Berlin for ~3.5h before returning to the airport.

Image No. 23216 Image No. 23217 Image No. 23218

And last week, May 8-10, it was time for the yearly Linuxwochen conference in Vienna. As we do not have an organized group of Mozillians here, we didn't have a Mozilla booth, but I usually share the OpenStreetMap booth (that's why you'll spot that in the pictures), as I'm pretty active in the local community there, and I just run around with Mozilla T-shirts and talk about Mozilla stuff in addition to OSM things. I also had submitted talks about Firefox OS and the Makey Makey, but as you'll see in my speaker profile, I was additionally "coerced" into a privacy and security panel discussion last-minute as well. I was put there as more or less a representative of Mozilla and, more generally, of those doing end-user software, and of course I tried to drive home a few points about how strongly Mozilla works for respecting user privacy, but I also pointed out how it is an interesting field that sometimes has tradeoffs with enabling features that people out there really want, and finding a balance can be hard.
The Firefox OS talk was quite appreciated by people attending, we had quite a few good questions asked there as well, and there was interest from people afterwards to actually see and try my FxOS phones - I remember how some had a hard time believing how well the ZTE Open worked with just 256MB of RAM and an all-web UI. :)
In the "hallway track", I did get a lot of questions about Mozilla on all three days, given that my T-shirts clearly pointed out my affiliation with the project. Some of those were about Firefox OS, mostly about strategy and availability, where I had to explain a few times how we target feature phone converts in emerging markets for now. A number of the concerns and questions were around Australis, with quite a few of the very technical folks there not liking how we messed up their customizations and preferences, including tabs-on-bottom or the add-ons bar, or how esp. our tabs looked "just like Chrome" now - but I even heard one or two comments on how awesome the new design was. Some of those I could ease with explanations and pointing to the Classic Theme Restorer add-on, but some of them are just unhappy (and it's not helpful that the otherwise very helpful Tour does not run when NoScript is installed and active). And then there were questions about our finances (the "Google dependency" issue) and about the fact that the new Sync clashes with the Master Password (which also protects casual "friends" from reading through passwords in your password manager), among others. Overall, a lot of talk about Mozilla, and I hope I could make most of those people feel better about us than before, wherever they stood then.

Both events cost me a lot of sleep and energy right there, but they feel like they were worth the investment for sure, and meeting and talking to people also gives me energy of a different form. ;-)

By KaiRo, at 18:42 | Tags: Lantea, Linuxwochen, MaKey MaKey, Mozilla, OSM | no comments | TrackBack: 0

May 1st, 2014

Finding an Openly Licensed JS Graph Library

As I mentioned at the end of the recent post on stability graphs, I was looking into doing JS-based, dynamic versions of those graphs. Those would be able to always have current data without manually copying stuff around (as I've done previously) and should be available to more people in the Mozilla community.

That said, even though I tried previously to do some canvas magic to create graphs in JS, it's tedious to create all that code by oneself, so I set out to find a library that can help me.

My set of ideal requirements was:
  • It should do graphs only, my code can handle the rest.
  • It should be lightweight and easy to integrate.
  • It must be Free Software (i.e. have an "open source" license that allows all freedoms we are used to for code at Mozilla).
  • It should support mousing over the graph series and getting exact values.
  • Possibly, have annotations for certain points in the graph ("this spike was the Google doodle crash" or "this day had data anomalies").
  • Potentially allow zooming into a certain time frame as I have long time series in those graphs.
  • Maybe it would be nice if it could read my JSON data directly, but I need some processing of the numbers anyhow, so JS objects/arrays or such as data input is OK as well.

So, I went on a web search and tried to find libraries that would fulfill my requirements. Interestingly, I saw a number of commercially licensed JS libraries floating around - I guess there's some business in creating graphs out there.
The list below is the libraries I took a closer look at (before you ask, flot is not on the list because it requires jQuery and therefore violates my "graphs only" and "lightweight" goals right away):
  • D3.js
    • Can do lots of things, including tons of animations.
    • Centers around reorganizing DOM access, i.e. does way more than what I like and violates "graphs only" and "lightweight" goals just as much as jQuery/flot.
  • Chart.js
    • This one is lightweight and graph-only.
    • Examples do not have dynamic tooltips, but they do have a number of animations that only distract from the actual graph data.
  • RGraph
    • Dynamic tooltips in the examples are a bit sluggish.
    • Supports feeding JSON to it directly.
  • dygraphs
    • Has interactive zoom etc. by default, supports annotations for specific points.
    • Has somewhat of a list of dependencies (all open).
    • Only supports CSV or JS Array data.

So none of those turned out to be completely ideal in the light of my goals/preferences. That said, dygraphs looked decent enough so I went with that in the end and have dynamic versions of my graphs implemented.
(Ping me on IRC if you want a link, I don't want to link them on the blog as the uninformed could come to wrong or hasty conclusions looking at the data).

Of course, if you are looking for a graphs library for your project, you might choose completely differently, but maybe the list is helpful to find what fits your needs. ;-)

By KaiRo, at 01:12 | 6 comments | TrackBack: 0
