The roads I take...

KaiRo's weBlog



11 July 2023

Integrating Magento 2 Shop With FreeFinance and Custom Merchandise Management

In the last few months, I have been building up a new business called Trade Post 47. While we envision it as a little space station in orbit of a nice planet, to most people it will be a science fiction merchandise trading company, with an online shop and booths at events like local Comic-Cons in Central Europe, especially in Austria. If you want to learn more, we have put up a complete page about us on our shop website.
Image No. 23537

To manage our products, which we get from different vendors (sometimes the same product via different vendors), as well as to plan and manage our orders, I built an internal, custom merchandise management system in my own PHP framework/CMS CBSM (which is also used for this blog, for example). I did this mostly out of convenience, as I maintain this system anyhow and needed some database tables with a fitting UI for managing our merchandise, vendors, and more (even conventions we may want to run booths at).

OTOH, the public shop is (as you may notice when looking at the website) an installation of Magento 2 (i.e. the open-source version of what is nowadays called "Adobe Commerce"). We decided to run that system because we are partnering closely with MCO Shop, a local ham radio and electronics shop, who already had this software running on the servers we share and know how to work with it, run the upgrades, and maintain it. After all, when building a new business, as in so many areas of life, it always helps if you can share some resources and knowledge with others. First, I adapted the Magento theme to make it look more "space-like", most importantly giving it a dark instead of a light background. Once that worked well enough, I still had to get those products that we actually ordered from my custom management system into this Magento shop. Initially, I did this by creating a big CSV file and importing it into the shop, but it was clear that in the long run we needed a more fine-grained solution that can add and update entries individually.

Additionally, when we run booths on our "away missions" to events/conventions (or whenever we otherwise sell anything in person), Austrian law requires us to use a cash register system that follows strict rules and passes a certification so that the government can be sure we pay taxes on everything we sell. For that, we decided to use the cash register integrated into our bookkeeping system, a specialized Austrian solution called FreeFinance, which also runs online as a web service. And of course, the cash register needs a full list of products and prices as well, which we also initially solved with CSV creation and import, in anticipation of a more fine-grained solution after our first big appearance at Austria Comic-Con in early June.
Image No. 23538

As icing on the cake, we also wanted to generate nicely styled invoice documents in FreeFinance for all online shop orders that weren't paid via the cash register, and in the future, we'll want to make the online shop automatically aware of merchandise sold at events so that those items are removed from the stock available for online purchases.

To achieve that, I looked into the APIs that both Magento and FreeFinance provide, accessing them from the custom internal system that I have full access to and that is required for providing the merchandise data anyhow. I found that the FreeFinance API is relatively simple, well-documented, and does authentication via OAuth2, which I already had some knowledge of (and code to access it) from other projects, including some code already in the CBSM system for facilitating its own logins. That said, Magento is a different beast: its product catalog feature set is way more complex, and so is its API. Also, there is no well-structured collective documentation that would explain what various things mean or which of the multiple paths to the same result is preferable; instead, you turn on developer mode so that a Swagger/OpenAPI UI becomes available on your installation, experiment there, and search the web for how things could work and what certain values might mean. In addition, authentication is done via OAuth1, which is more complicated than its successor and which I didn't have any pre-existing code for, though I could build on some code from their tutorial. Also, as we're running Magento ourselves on the server side, I could experiment more easily there than with FreeFinance, which is a hosted service where I needed to request access credentials from their team. On the other hand, FreeFinance gave us access to a testing system, whereas for Magento, for various reasons, we only have a live system and no staging/testing environment, so we can't "play around" very much when testing.
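To give an idea of the Magento side, here is a minimal sketch of an OAuth1-signed request against Magento 2's standard REST API from Python, assuming the usual integration credentials (consumer key/secret plus access token/secret) created in the Magento admin; the base URL, credentials, and SKU are placeholders, and depending on the Magento version the signature method may need to be HMAC-SHA1 instead:

import requests
from requests_oauthlib import OAuth1

MAGENTO_BASE = "https://shop.example.com"  # placeholder base URL

auth = OAuth1(
    "consumer_key", "consumer_secret",
    "access_token", "access_token_secret",
    signature_method="HMAC-SHA256",  # newer Magento releases expect SHA256-signed requests
)

# Fetch a single product by SKU via the standard catalog endpoint.
resp = requests.get(f"{MAGENTO_BASE}/rest/V1/products/EXAMPLE-SKU", auth=auth)
resp.raise_for_status()
product = resp.json()
print(product["name"], product["price"])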

I wrote quite a bit of code for all those cases. The simplest part was, and still is, updating the cash register with our products; the only slight complication there is adding categories when needed. For adding products to the shop, I needed to take care of all kinds of things, like creating and managing configurable products, adding values to some attributes, uploading images, managing categories, and more. And the curious structure of the Magento API, which requires much more detailed action than the CSV import route, did at times make this even more complicated - but it works now, and I can just add or change a product in the merch management and, by the next day at the latest, both the shop and the cash register reflect those changes (I can trigger the sync jobs earlier if required). For creating the invoice documents, I could base some things on a make.com "blueprint" provided by FreeFinance, but for one thing, we don't want to use a paid third-party service if we can automate this ourselves, and for another, we have some restrictions and special cases of our own (like only generating invoices for orders actually paid via the web shop payment integration and not in person via the cash register). I did run into some curiosities there, like the Magento order API result containing several pieces of data multiple times, or us initially using a document layout template that didn't allow different products to have different VAT rates (which we require) - but that's working now as well. The reverse part about getting cash register purchases into the online shop is still on my plate, but I now have a good plan for how to do that, and some time until our next big "away mission" where this will be important to have.
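As a rough sketch of the invoice-filtering idea, this is one way the Magento orders API could be queried from Python, keeping only orders paid via the web shop's payment integration - the search-criteria format and the "payment" field are standard Magento 2 REST behavior, while the base URL, the auth object (see the sketch above), and the payment method codes are placeholders:

import requests

def fetch_web_paid_orders(base_url, auth, since_iso):
    # Ask Magento for orders created since the given timestamp.
    params = {
        "searchCriteria[filter_groups][0][filters][0][field]": "created_at",
        "searchCriteria[filter_groups][0][filters][0][condition_type]": "gteq",
        "searchCriteria[filter_groups][0][filters][0][value]": since_iso,
    }
    resp = requests.get(f"{base_url}/rest/V1/orders", params=params, auth=auth)
    resp.raise_for_status()
    # Keep only orders paid via the web shop's payment methods (codes are assumptions).
    web_paid = []
    for order in resp.json().get("items", []):
        method = order.get("payment", {}).get("method", "")
        if method in ("stripe_payments", "paypal_express"):
            web_paid.append(order)
    return web_paid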

All in all, this has been a quite interesting experience, and I'm sure that now that I am comfortable working with those systems and APIs, I will do more with them in the long run - and our Trade Post 47 will hopefully keep growing as well and therefore have additional requirements in the future. If you are a developer and have questions about some details, feel free to contact me - and if you are running such systems yourself and need a developer who can adapt them in a similar fashion, I'm happy to offer those services as a contractor!

By KaiRo, at 03:51 | Tags: API, FreeFinance, Magento, merch, PHP, science fiction, Shop, Trade Post 47 | no comments | TrackBack: 0

27 February 2022

Connecting the Mozilla Community

After some behind-the-scenes discussions with Michael Kohler on what I could contribute at this year's FOSDEM, I ended up doing a presentation about my personal Suggestions for a Stronger Mozilla Community (video is available on the linked page). While figuring out the points I wanted to talk about and assembling my slides for that talk, I realized that one of the largest issues I'm seeing is that the Mozilla community nowadays feels very disconnected to me - like several islands: within each, good work is being done, but most people don't know much about what's happening elsewhere. That has been reinforced by a lot of interesting projects being split off from Mozilla into separate projects in recent years (see e.g. Coqui, WebThings, and others) - which often takes them off the radar of many people, even though I still consider them part of this wider community around the Mozilla Manifesto and the Open Web.

Following the talk, I brought that topic to the Reps Weekly Call this last week (see linked video), esp. focusing on one slide from my FOSDEM talk about finding some kind of communication channel to cross-connect the community. As Reps are already a somewhat cross-functional community group, my hope is that a push from that direction can help get such a channel in place - and figure out what exactly is a good idea and doable with the resources we have available (I for example like the idea of a podcast, as I like how those can be listened to while traveling, cooking, doing housework, and other things - but it would be a ton of work to organize and produce that).
Some ideas that came up in the Reps Call were for example a regular newsletter on Mozilla Discourse in the style of the MoCo-internal "tl;dr" (which Reps have access to via NDA), but as something that is public, as well as from and for the community - or maybe morphing some Reps Calls regularly into some sort of "Community News" calls that would highlight activities around the wider community, even bringing in people from those various projects/efforts there. But there may be more, maybe even better ideas out there.

To get this effort to the next level, we agreed that we'll first get the discussion rolling on a Discourse thread that I started after the meeting and then probably do a brainstorming video call. Then we'll take all that input and actually start experimenting with the formats that sound good and are practically achievable, to find what works best for us.

If you have ideas or other input on this, please join the conversation on Discourse - and also let us know if you can help in some form!

By KaiRo, at 18:15 | Tags: community, FOSDEM, Mozilla, Reps | no comments | TrackBack: 1

31 March 2021

Is Mozilla Still Needed Nowadays?

tl;dr: Happy 23rd birthday, Mozilla. And for the question: yes.

Here's a bit more rambling on this topic...

First of all, the Mozilla project was officially started on March 31, 1998, which is 23 years ago today. Happy birthday to my favorite "dino" out there! For more background, take a look at my Mozilla History talk from this year's FOSDEM, and/or watch the "Code Rush" documentary that preserved that moment in time so well and also gives nice insight into late-90s Silicon Valley culture.

Now, Mozilla initially was there to "act as the virtual meeting place for the Mozilla code" while Netscape was still around with the goal of winning back the browser market that was slipping over to Microsoft. The revolutionary stance of developing a large consumer application in the open, the marketing of "hack - this technology could fall into the right hands", the general novelty of the open-source movement back then - and last but not least a very friendly community (as I could find out myself) - made this young project quickly grow to be more than a development vehicle for AOL/Netscape, though. And in 2003, a mission to "preserve choice and innovation on the Internet" was set up for the project, shortly after backed by a non-profit Mozilla Foundation, and then with an independently developed Firefox browser, implementing "the idea [...] to design the best web browser for most people" - and starting to take back the web from the stagnation and lack of choice represented by >95% of the landscape being dominated by Microsoft Internet Explorer.

The exact phrasing of Mozilla's mission has been massaged a few times, but from the view of the core contributors, it has always meant the same thing. It currently reads:
Quote:
Our mission is to ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.
On the Foundation site, there's the sentence "It is Mozilla’s duty to ensure the internet remains a force for good." - which also pretty much means the same thing, just in less specific terms. Of course, the spirit of the project was also put into 10 pretty concrete technical principles, prefaced by 4 social pledges, in the Mozilla Manifesto, which makes it even more clear and concrete what the project sees as its core purpose.

So, if we think about the question of whether we still need Mozilla nowadays, we should look at whether moving in that direction is still required and helpful, and whether Mozilla is still able and willing to push those principles forward.

When quite a few communities I'm part of - or would like to be part of - are moving to Discord or are adding it as an additional option to Facebook groups, and I read the Terms of Service of those two tightly closed and privacy-unfriendly services, I have to conclude that the current Internet is not open, not putting people first, and I feel neither empowered, safe, nor independent in that space. When YouTube selects recommendations so that I live in a weird bubble that pulls me into conspiracies and negativity pretty fast, I don't feel like individuals can shape their own experience. When watching videos stored on certain sites is cheaper or less throttled than other sources with any new data plan I can get for my phone, or when geoblocking hinders me from watching even a trailer of my favorite series, I don't feel like the Internet is equally accessible to all. Neither do I when political misinformation is targeted at certain groups of users in election ads on social networks without any transparency to the public. But I would long for all that to be different, and to follow the principles I talked about above. So, I'd say those are still required, and would be helpful to push for.

It all feels like we need someone to unfck the Internet right now more than ever. We need someone to collect info on what's wrong and how it could get better there. We need someone to educate users, companies and politicians alike on where the dangers are and how we can improve the digital space. We need someone who gives us a fast, private and secure alternative to Google's browser and rendering engine that dominates the Internet now, someone to lead us out of the monoculture that threatens to bring innovation to a grind. Someone who has protecting privacy of people as one of their primary principles, and continues work on additional ways of keeping people safe. And that's just the start. As the links on all those points show, Mozilla tries hard to do all that, and more.

I definitely think we badly need a Mozilla that works on all those issues, and we need a whole lot of other projects and people to help in this space as well. Be it in advocacy, in communication, in technology (links are just examples), or in other areas.

Can all that actually succeed in improving the Internet? Well, it definitely needs all of us to help, starting with using products like Firefox, supporting organizations like Mozilla, spreading the word, maybe helping to build a community, or even contributing where we can.

We definitely need Mozilla today, even 23 years after its inception. Maybe we need it more than ever, actually. Are you in?

CC-BY-SA The text of this post is licensed under Creative Commons BY-SA 4.0.

By KaiRo, at 23:32 | Tags: history, manifesto, mission, Mozilla | no comments | TrackBack: 0

17 March 2021

Crypto stamp Collections - An Overview

Image No. 23482

As mentioned in a previous post, I've been working with the Capacity Blockchain Solutions team on the Crypto stamp project, the first physical postage stamp with a unique digital twin, issued by the Austrian Postal Service (Österreichische Post AG). After a successful release of Crypto stamp 1, one of our core ideas for a second edition was to represent stamp albums (or stamp collections) in the digital world as well - and not just the stamps themselves.

We set off to find existing standards for Ethereum contracts that group NFTs (ERC-721 and potentially ERC-1155 tokens) together, and we found that there are a few possibilities (like EIP-998), but those get complicated very quickly. We wanted a collection (a stamp album) to actually be the owner of those NFTs or "assets", but at the same time to be owned by an Ethereum account and to be transferable (or tradeable) as an NFT by itself. So, for the former (being the owner of assets), it needs to be an Ethereum account (in this case, a contract), and for the latter (being owned and traded), it needs to be a single ERC-721 NFT as well. The Ethereum account should not be shared with other collections, so ownership of an asset is as transparent as people and (distributed) apps expect. Also, we wanted to be able to give names to collections (via ENS) so it would be easier for normal users to work with them - and that also requires every collection to have a distinct Ethereum account address (which the aforementioned EIP-998 is unable to provide, for example). That said, to be NFTs themselves, the collections need to be "indexed" by what we could call a "registry of Collections".

To achieve all that, we came up with a system that we think could be a model for future similar projects as well and would ideally form the basis of a future standard itself.

Image No. 23486

At its core, a common "Collections" ERC-721 contract acts as the "registry" for all Crypto stamp collections; every individual collection is represented as an NFT in this "registry". Additionally, every time a new NFT for a collection is created, this core contract acts as a "factory" and creates a new separate contract for the collection itself, connecting this new "Collection" contract with the newly created NFT. On that new contract, we set the requested ENS name for easier addressing of the Collection.
Now this Collection contract is the account that receives ERC-721 and ERC-1155 assets and becomes their owner. It also does some bookkeeping so it can actually be queried for assets, and it has functionality so that the owner of the Collection's own NFT (the actual owner of the Collection itself) has full control over those assets, including functions to safely transfer them away again or even call functions on other contracts in the name of the Collection (similar to what you would see on e.g. multisig wallets).
As the owner of the Collection's NFT in the "registry" contract ("Collections") is the one that has power over all functionality of this Collection contract (and therefore the assets it owns), just transferring ownership of that NFT via a normal ERC-721 transfer can give a different person control, and therefore a single trade can move a whole collection of assets to a new owner, just like handing a full album of stamps physically to a different person.

To go into more detail, you can look up the code of our Collections contract on Etherscan. You'll find that it exposes an ERC-721 name of "Crypto stamp Collections" with a symbol of "CSC" for the NFTs. The collections are registered as NFTs in this contract, so there's also an Etherscan Token Tracker for it, and as of writing this post, over 1600 collections are listed there. The contract lets anyone create new collections, and optionally hand over a "notification contract" address and data for registering an ENS name. When doing that, a new Collection contract is deployed and an NFT minted - but the contract deployment is done with a twist: as deploying a lot of full contracts with a larger set of code is costly, an EIP-1167 minimal proxy contract is deployed instead, which holds all the data for the specific collection while calling all its code via proxying to - in this case - our Collection Prototype contract. This makes creating a new Collection contract as cheap as possible in terms of gas cost while still giving the user a good amount of functionality. Thankfully, Etherscan, for example, knows about those minimal proxy contracts and can even show you their "read/write contract" UI with the actually available functionality - and additionally they know ENS names as well, so you can go to e.g. etherscan.io/address/kairo.c.cryptostamp.eth and read the code and data of my own collection contract.
For connecting that Collection contract with its own NFT, the Collections (CSC) contract could have a translation table between token IDs and contract addresses, but we went a step further and just set the token ID to the integer value of the address itself - as an Ethereum address is a 20-byte (40 hex digit) value, this results in large integer numbers (like 675946817706768216998960957837194760936536071597 for mine), but as Ethereum uses 256-bit values by default anyhow, it works perfectly well and no translation table between IDs and addresses is needed. We still do have explicit functions on the main Collections (CSC) contract to get from token IDs to addresses and vice versa, though, even if in our case, it can be calculated directly in both ways.
Both the proxy contract pattern and the address-to-token-ID conversion scheme are optimizations we are using, but if we were to standardize collections, those would not be part of the core standard itself but rather recommended implementation practices.
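As a small illustration of that conversion, here is a minimal off-chain sketch in Python (the placeholder address is made up; the real contract exposes equivalent functions on-chain):

# The token ID is simply the integer value of the Collection contract's
# address, so both directions can be computed locally without any lookup table.

def address_to_token_id(address: str) -> int:
    return int(address, 16)

def token_id_to_address(token_id: int) -> str:
    # Pad back to 40 hex digits (20 bytes); a library like web3.py can
    # turn this into a checksummed address if needed.
    return "0x" + format(token_id, "040x")

example = "0x" + "ab" * 20  # placeholder address
assert token_id_to_address(address_to_token_id(example)) == example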

Image No. 23485

Of course, users do not need to care about those details at all - they just go to crypto.post.at, click "Collections" and create their own collection there (when logged in via MetaMask or a similar Ethereum browser module), and they also go to the website to look at its contents (e.g. crypto.post.at/collection/kairo). Ideally, they'll also be able to view and trade them on platforms like OpenSea - but the viewing needs specific support (which would probably require standardization to at least be well in progress), and the trading only works well if the platform can deal with NFTs that can change value while they are on auction or on the trade market (and then any bids made before need to be invalidated or re-confirmed in some fashion). Because the latter needs a way to detect those value changes and OpenSea doesn't have that, they had to suspend trade for collections for now, after someone exploited that missing support by transferring assets out of a collection while it was on auction. That said, there are ideas on how to get this back the right way, but it will need work on both the NFT creator side (us in the specific case of collections) and platforms that support trade, like OpenSea. Most importantly, the meta data of the NFT needs to contain some kind of "fingerprint" value that changes whenever any property changes that influences the value, and the trading platform needs to check for that and react properly to changes of that "fingerprint", so bids are only automatically processed as long as it doesn't change.
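To make the "fingerprint" idea a bit more concrete, here is one conceivable way to compute such a value off-chain from a collection's current contents - this is purely illustrative and not the actual Crypto stamp implementation:

import hashlib

def collection_fingerprint(assets):
    # assets: iterable of (nft_contract_address, token_id) tuples currently
    # owned by the collection; any change to this set changes the fingerprint,
    # so pending bids can be invalidated or re-confirmed.
    canonical = "|".join(
        f"{addr.lower()}:{token_id}" for addr, token_id in sorted(assets)
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

print(collection_fingerprint([("0x" + "ab" * 20, 17), ("0x" + "cd" * 20, 3)]))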

For showing contents or calculating such a "fingerprint", there needs to be a way to find out which assets the collection actually owns. There are three ways to do that in theory: 1) have a list of all assets you care about and look up whether the collection address is listed as their owner, 2) look at the complete event log on the blockchain since creation of the collection and filter all NFT Transfer events for ones going to the collection address or away from it, or 3) have some way for the collection itself to record which assets it owns and allow enumerating them. Option 1 works well as long as your use case only covers a small number of different NFT contracts, as e.g. the Crypto stamp website is doing right now. Option 2 gives general results and is actually pretty feasible with the functionality existing in the Ethereum tool set, but it requires a full node and is somewhat slow (a small sketch of that approach follows after this section).
So, to allow general usage with decent performance, we actually implemented everything needed for option 3 in the collections contract. Any "safe transfer" of ERC-721 or ERC-1155 tokens (e.g. via a call to the safeTransferFrom() function) - which is the normal way those are transferred between owners - actually tests whether the new owner is a simple account or a contract, and if it is a contract, it "asks" via a contract function call whether that contract can receive tokens. The collection contract uses that function call to register any such transfer into the collection and puts the received assets into a list. As transferring an asset away requires a function call on the collection contract anyhow, removing it from that list can be done there. So, this list can be made available for querying and will always be accurate - as long as "safe" transfers are used. Unfortunately, ERC-721 allows "unsafe" transfers via transferFrom(), even though it warns that NFTs "MAY BE PERMANENTLY LOST" when that function is used. This was probably added to the standard mostly for compatibility with CryptoKitties, which predate the standard and only supported "unsafe" transfers. To deal with that, the collections contract has a function to "sync" ownership, which is given a contract address and token ID and adjusts its assets list accordingly by either adding or removing the entry. Note that there is a theoretical possibility to also lose an asset without being able to track it, which is why both directions are supported there. (Note: OpenSea has used "unsafe" transfers in their "gift" functionality at least in the past, but that hopefully has been fixed by now.)
So, when using "safe" transfers or - when "unsafe" ones are used - "syncing" afterwards, we can query the collection for its owned assets and list those in a generic way, no matter which ERC-721 or ERC-1155 assets are sent to it. As usual, any additional data and meta data of those assets can then be retrieved via their NFT contracts and their meta data URLs.
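For completeness, here is a minimal sketch of what option 2 (scanning the Transfer event log) could look like with web3.py - the node URL and contract addresses are placeholders:

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.example/rpc"))  # placeholder node URL

TRANSFER_TOPIC = Web3.keccak(text="Transfer(address,address,uint256)").hex()

def addr_topic(address: str) -> str:
    # Indexed address parameters are left-padded to 32 bytes in log topics.
    return "0x" + address.lower().replace("0x", "").rjust(64, "0")

def transfers_to_collection(nft_contract: str, collection: str, from_block: int = 0):
    # All ERC-721 Transfer events on the given NFT contract going TO the collection;
    # a second query with the "from" topic set would catch outgoing transfers.
    return w3.eth.get_logs({
        "address": nft_contract,
        "fromBlock": from_block,
        "toBlock": "latest",
        "topics": [TRANSFER_TOPIC, None, addr_topic(collection)],
    })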

Image No. 23487

I mentioned a "notification contract" before, which can be specified at creation of a collection. When adding or removing an asset from the internal list in the collection, the collection also calls that notification contract (if one is set) to notify it of the asset list change. Using that feature, it was possible to award achievements directly on the blockchain, e.g. for collecting a certain number of NFTs of a specific type or one of each motif of Crypto stamps. Unfortunately, this additional contract call costs even more gas on Ethereum, as does tracking and awarding the achievements themselves, so rising gas costs forced us to remove that functionality and not set a notification contract for new collections, as well as to offer an "optimization" feature that removes it from collections already created with one. This removal made transaction costs for using collections more bearable again for users, though I still believe that on-chain achievements were a great idea and probably a feature that was ahead of its time. We may come back to that idea when it can be done with an acceptably small impact on transaction cost.

One thing I also mentioned before is that the owner of a Collection can actually call functions in other contracts in the name of the Collection, similar to functionality that multisig wallets provide. This is done via an externalCall() function, to which the caller needs to hand over a contract address to call and an encoded payload (which can relatively easily be generated e.g. via the web3.js library). The result is that the Collection can e.g. call the function for Crypto stamps sold via the OnChain shop to have their physical versions sent to a postal address, which is a function that only the owner of a Crypto stamp can call - as the Collection is that owner and its own owner can call this "external" function, things like this can still be achieved.
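A sketch of what building such a payload could look like from Python (the post mentions web3.js; encodeABI in web3.py is the equivalent, renamed encode_abi in newer versions) - collection and target are web3.py contract objects, and the function name/arguments passed in are whatever the target contract actually exposes:

def call_via_collection(collection, target, fn_name, args, sender):
    # Encode the call to the target contract's function, then have the
    # Collection execute it via its externalCall() function.
    payload = target.encodeABI(fn_name=fn_name, args=args)
    return collection.functions.externalCall(target.address, payload).transact(
        {"from": sender}
    )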

To conclude, with Crypto stamp Collections we have created a simple but feature-rich solution to bring the experience of physical stamp albums to the digital world, and we see a good possibility to use the same concept generally for collecting NFTs and enabling a whole such collection of NFTs to be transferred or traded easily as one unit. And after all, NFT collectors would probably expect a collection of NFTs or a "stamp album" to have its own NFT, right? I hope we can push this concept to larger adoption in the future!

By KaiRo, at 01:01 | Tags: blockchain, capacity, Crypto stamp, Ethereum, NFT | no comments | TrackBack: 1

4 March 2021

Mozilla History Talk @ FOSDEM

The FOSDEM conference in Brussels has become a bit of a ritual for me. Ever since 2002, there has only been a single year of the conference that I missed, and any time I was there, I did take part in the Mozilla devroom - most years also with a talk, as you can see on my slides page.

This year, things were a bit different as for obvious reasons the conference couldn't bring together thousands of developers in Brussels but almost a month ago, in its usual spot, the conference took place in a virtual setting instead. The team did an incredibly good job of hosting this huge conference in a setting completely run on Free and Open Source Software, backed by Matrix (as explained in a great talk by Matthew Hodgson) and Jitsi (see talk by Saúl Ibarra Corretgé).

On short notice, I also added my bit to the conference - this time not talking about all the shiny new software, but diving into the past with "Mozilla History: 20+ Years And Counting". After that long a time that the project exists, I figured many people may not realize its origins and especially early history, so I tried to bring that to the audience, together with important milestones and projects on the way up to today.

Image No. 23488

The video of the talk has been available for a short time now, and if you are interested yourself in Mozilla's history, then it's surely worth a watch. Of course, my slides are online as well.
If you want to watch more videos to dig deeper into Mozilla history, I heavily recommend the Code Rush documentary from when Netscape initially open-sourced Mozilla (also an awesome time capsule of late-90s Silicon Valley) and a talk on early Mozilla history from Mitchell Baker that she gave at an all-hands in 2012.
The Firefox part of the history is also where my song "Rock Me Firefox" (demo recording on YouTube) starts off, for anyone who wants some music to go along with all this! ;-)

While my day-to-day work is in bleeding-edge Blockchain technology (like right now figuring out Ethereum Layer 2 technologies, like Optimism), it's sometimes nice to dig into the past and make sure history never forgets the name - Mozilla.

And, as I said in the talk, I hope Mozilla and its mission have at least another successful 20 years to go into the future!

By KaiRo, at 23:41 | Tags: FOSDEM, history, Mozilla, Tech Speakers | no comments | TrackBack: 1

6 April 2020

Sending Encrypted Messages from JavaScript to Python via Blockchain

Image No. 23482

Last year, I worked with the Capacity team on the Crypto stamp project, the first physical postage stamp with a unique digital twin, issued by the Austrian Postal Service (Österreichische Post AG). Those stamps are mainly intended as collectibles, but their physical "half" can be used as valid postage on packages or letters, and a QR code on that physical stamp links to a website presenting the digital collectible. Our job (at Capacity Blockchain Solutions) was to build that digital collectible, the website at crypto.post.at, and the back-end service delivering both public meta data and the back end for the website. I specifically did most of the work on the Ethereum Smart Contract for the digital collectible, a "non-fungible token" (NFT) using the ERC-721 standard (publicly visible), as well as the back-end REST service, which I implemented in Python (based on Flask and Web3.py). The coding for the website was done by colleagues, of course using JavaScript for the dynamic elements.

Image No. 23481

One feature we have in this project is that people can purchase Crypto stamps directly from the blockchain, with the website guiding those with an Ethereum-enabled browser (e.g. with the MetaMask add-on) through that. By sending Ether cryptocurrency to the right address (the OnChainShop contract), they will directly receive the digital NFT - but then, every Crypto stamp consists of both a digital and a physical item, so what about the physical part?
Of course, we cannot send a physical item to an Ethereum address (which is just a mostly-random number), so we needed a way for the owner of the NFT to give us (or actually Post AG) a postal address to send the physical stamp to. For this, we added a form that allows them to enter the postal address for stamps bought via the OnChain shop - but then the issue arose of how we would verify that the sender was the actual owner of the NFT. Additionally, we had to figure out how to do this without requiring a separate database or authentication in the back end, as we did not need those features for anything else: authentication for purchases is already done via signed transactions on the blockchain, and any data that needs to be stored is either static or on the blockchain.

We can easily verify the ownership if we send the information to a Smart Contract function on the blockchain, given that the owner has already proven to be able to do such calls by purchasing via the OnChain shop, and anyone sending transactions there has to sign them. To not need to store the whole postal address in the blockchain state database, which is expensive, we just emit an event and therefore put it in the event log, which is much cheaper and can still be read by our back-end service and forwarded to Post AG. But then, anything sent to the public Ethereum blockchain (no matter if we put it into state or logs afterwards) is also visible to everyone else, and postal addresses are private data, so we need to ensure others reading the message cannot actually read that data.
So, our basic idea sounded simple: We generate a public/private key pair, use the public key to encrypt the postage address on the website, call a Smart Contract function with that data, signed by the user, emit an event with the data, and decrypt the information on the back-end service when receiving the event, before forwarding it to the actual shipping department in a nice format. As someone who has heard a lot about encryption but not actually coded encryption usage, I was surprised how many issues we ran into when actually writing the code.
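The key pair itself can be generated on the back-end side, for example with Python's cryptography package - a minimal sketch (the private key stays with the back-end service, and the public key is handed to the website in the 65-byte uncompressed format that eccrypto expects):

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Generate a secp256k1 key pair; only the public key goes to the website.
private_key = ec.generate_private_key(ec.SECP256K1(), default_backend())
public_key_bytes = private_key.public_key().public_bytes(
    Encoding.X962, PublicFormat.UncompressedPoint
)
print("private key (keep secret!):", hex(private_key.private_numbers().private_value))
print("public key for the website:", public_key_bytes.hex())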

So, the first thing I did was to look at what techniques there are for sending encrypted messages, and pretty soon I found ECIES and was enthusiastic: sending encrypted messages was standardized, there are libraries for this in many languages, and we would just need to use implementations of that standard on both sides and it would all be solved! Yay!
So I looked for ECIES libraries, both for JavaScript to be used in the browser and for Python, making sure they are still maintained. After some looking, I settled on eccrypto (JS) and eciespy, both of which sounded pretty decent in usage and were being kept up to date. I created a private/public key pair, and encrypting and decrypting back and forth via eccrypto worked, so I went on to try decrypting via eciespy, with great hope - only to see that eccrypto.encrypt() results in an object with 4 member strings while eciespy expects a single string as input. Hmm.

With some digging, I found out that ECIES is not the same as ECIES. Sigh. It's a standard in terms of providing a framework for encrypting messages, but there are multiple variants for the steps in the standardized mechanism, and both sides (encryption and decryption) need to agree on using the same ones to make it work correctly. Now, both eccrypto and eciespy implement exactly one variant each - and of course two different ones. Things would have been too easy if the implementations had been compatible, right?

So, I had to unpack what ECIES does to understand better what happens there. For one thing, ECIES basically does an ECDH exchange with the receiver's public key and a random "ephemeral" private key to derive a shared secret, which is then used as the key for AES-encrypting the message. The message is sent over to the recipient along with the AES parameters (IV, MAC) and the "ephemeral" public key. The recipient can use that public key along with their private key in ECDH, get the same shared secret, and do another round of AES with the given parameters to decrypt (as AES is symmetric, i.e. encryption and decryption are the same operation).

While both libraries use the secp256k1 curve (which incidentally is also used by Ethereum and Bitcoin) for ECDH, and both use AES-256, the main difference there, as I figured out, is the AES cipher block mode - eccrypto uses CBC while eciespy uses GCM. Both modes are fine for what we are doing here, but we need to make sure we use the same one on both sides. An additional difference is that eccrypto gives us the IV, MAC, ciphertext, and ephemeral public key as separate values, while eciespy expects them packed into a single string - but that would be easier to cope with.

In any case, I would need to change one of the two sides and not use the simple-to-use libraries. Given that I was writing the Python code while my colleagues working on the website were already busy enough with other feature work needed there, I decided that the JavaScript-side code would stay with eccrypto and I'd figure out the decoding part on the Python side, taking apart and adapting the steps that eciespy would have done.
We'd convert the 4 values returned from eccrypto.encrypt() to hex strings, stick them into a JSON and stringify that to hand it over to the blockchain function - using code very similar to this:
var data = JSON.stringify(addressfields);
var eccrypto = require("eccrypto");
eccrypto.encrypt(pubkey, Buffer.from(data))
.then((encrypted) => {
  var sendData = {
    iv: encrypted.iv.toString("hex"),
    ephemPublicKey: encrypted.ephemPublicKey.toString("hex"),
    ciphertext: encrypted.ciphertext.toString("hex"),
    mac: encrypted.mac.toString("hex"),
  };
  var finalString = JSON.stringify(sendData);
  // Call the token shipping function with that final string.
  OnChainShopContract.methods.shipToMe(finalString, tokenId)
  .send({from: web3.eth.defaultAccount}).then(...)...
});

So, on the Python side, I went and took the ECDH bits from eciespy, and by looking at the eccrypto code as an example and the relevant Python libraries, implemented code to make AES-CBC work with the data we get from our blockchain event listener. And then I found out that it still did not work, as I got garbage out instead of the expected result. Ouch. Adding more debug messages, I realized that the key used for AES was already wrong, so ECDH resulted in the wrong shared secret. Now I was really confused: same elliptic curve, right public and private keys used, but the much-proven ECDH algorithm gives me a wrong result? How can that be? I was full of disbelief and despair, wondering if this could be solved at all.
But I went for web searches trying to find out why in the world ECDH could give different results on different libraries that all use the secp256k1 curve. And I found reports of that same issue. And it comes down to this: while standard ECDH returns the x coordinate of the resulting point, the libsecp256k1 developers (I believe that's a part of the Bitcoin community) found it would be more secure to instead return the SHA256 hash of both coordinates of that point. This may be a good idea when everyone uses the same library, but eccrypto uses a standard library while eciespy uses libsecp256k1 - and so they disagree on the shared secret, which is pretty unhelpful in our case.

In the end, I also replaced the ECDH pieces from eciespy with equivalent code using a standard library - and suddenly things worked! \o/
I was full of joy, and we had code we could use for Crypto stamp - and since the release in June 2019, this mechanism has been used successfully for over a hundred shipments of stamps to postal addresses (note that we had a limited amount available in the OnChainShop).

So, here's the Python code used for decrypting (we pip install eciespy cryptography in our virtualenv - not sure if eciespy is still needed, but it may be for dependencies we end up using):
from Crypto.Cipher import AES
import hashlib
import hmac
import logging
import sys
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.backends import default_backend
from web3 import Web3

logger = logging.getLogger(__name__)

def ecies_decrypt(privkey, message_parts):
    # Do ECDH via the cryptography module to get the non-libsecp256k1 version.
    sender_public_key_obj = ec.EllipticCurvePublicNumbers.from_encoded_point(ec.SECP256K1(), message_parts["ephemPublicKey"]).public_key(default_backend())
    private_key_obj = ec.derive_private_key(Web3.toInt(hexstr=privkey),ec.SECP256K1(), default_backend())
    aes_shared_key = private_key_obj.exchange(ec.ECDH(), sender_public_key_obj)
    # Now let's do AES-CBC with this, including the hmac matching (modeled after eccrypto code).
    aes_keyhash = hashlib.sha512(aes_shared_key).digest()
    hmac_key = aes_keyhash[32:]
    test_hmac = hmac.new(hmac_key, message_parts["iv"] + message_parts["ephemPublicKey"] + message_parts["ciphertext"], hashlib.sha256).digest()
    if test_hmac != message_parts["mac"]:
        logger.error("Mac doesn't match: %s vs. %s", test_hmac, message_parts["mac"])
        return False
    aes_key = aes_keyhash[:32]
    # Actual decrypt is modeled after ecies.utils.aes_decrypt() - but with CBC mode to match eccrypto.
    aes_cipher = AES.new(aes_key, AES.MODE_CBC, iv=message_parts["iv"])
    try:
        decrypted_bytes = aes_cipher.decrypt(message_parts["ciphertext"])
        # Padding characters (unprintable) may be at the end to fit AES block size, so strip them.
        unprintable_chars = bytes(''.join(map(chr, range(0,32))).join(map(chr, range(127,160))), 'utf-8')
        decrypted_string = decrypted_bytes.rstrip(unprintable_chars).decode("utf-8")
        return decrypted_string
    except:
        logger.error("Could not decode ciphertext: %s", sys.exc_info()[0])
        return False

So, this mechanism has caused me quite a bit of work, and you probably don't want to know the words I shouted at my computer at times while trying to figure this all out, but the result works great, and if you are ever in need of something like this, I hope I could shed some light on how to achieve it!
For further illustration, here's a flow graph of how the data gets from the user to Post AG in the end - the ECIES code samples are highlighted with light blue, all encryption-related things are blue in general, red is unencrypted data, while green is encrypted data:
Image No. 23484
Thanks to Post AG and Capacity for letting me work on interesting projects like that - and keep checking crypto.post.at for news about the next iteration of Crypto stamp!

By KaiRo, at 17:04 | Tags: blockchain, capacity, Crypto stamp, ECIES, encrytion, Ethereum, JavaScript, Python | 1 comment | TrackBack: 1

5 March 2020

Picard Filming Sites: Season 1, Part 1

Ever since I was on a tour to Star Trek filming sites in 2016 with Geek Nation Tours and Larry Nemecek, I've become ever more interested in finding out to which actual real-world places TV/film crews have gone "on location" and shot scenes for our favorite on-screen stories. While the background of production of TV and film is of interest to me in general, I focus mostly on everything Star Trek and I love visiting locations they used and try to catch pictures that recreate the base setting of the shots in the production - but just the way the place looks "in the real world" and right now.
This has gone as far as me doing several presentations about the topic - two of which (one in German, one in English) I will also give at this year's FedCon - and creating an experimental website at filmingsites.com where I note all locations used in Star Trek productions as soon as I become aware of them.

In the last few years, around the Star Trek Las Vegas Conventions, I did get the chance to spend a few days traveling around Los Angeles and vicinity, visit a few locations and take pictures there. And after Discovery was filmed up in the Toronto area (and generally used quite few locations outside the studios), Picard is back producing in Southern California and using plenty of interesting places! And now with the first half of season 1 in the books (or at least ready for us to watch via streaming), here are a few filming sites I found in those episodes:

Image No. 23473
And we actually get started with our first location (picture is a still from the series) in "Remembrance", right after Picard wakes up from the "cold open" dream sequence: Château Picard was filmed at Sunstone Winery's Villa this time (after different places were used in its TNG appearances). The Winery's general manager even said "We encourage all the Trekkies and Trekkers to come visit us." - so I guess I'll need to put it in my travel plans soon. :)

Another one I haven't seen yet but will need to put in my plans to see is One Culver, previously known as Sony Pictures Plaza. That's where the scenes in the Daystrom Institute were shot - interestingly, in walking distance to the location of the former Desilu Culver soundstages (now "The Culver Studios") and its backlot (now a residential area), where the original Star Trek series shot its first episodes and several outdoor scenes of later ones as well. One Culver's big glass front structure and the huge screen on its inside are clearly visible multiple times in Picard's Daystrom Institute scenes, as is the rainbow arch behind it on the Sony Studios parking lot. Not having been there, I could only include a promotional picture from their website here.
Image No. 23476

Now a third filming site that appears in "Remembrance" is actually one I do have my own pictures of: after seeing the first trailer for Picard and getting a hint where the building depicted in that clip is, I made my way last summer to a place close to Disneyland and took a few pictures of Anaheim Convention Center. Walking to the main entrance, I found the attached Arena to just look good, so I also got one shot of that one in - and then I saw that in this episode, they used it as the Starfleet Archive Museum!
Of course, in the second episode, "Maps and Legends", we then see the main entrance, where Picard goes to meet the C-in-C, so presumably Starfleet headquarters. It looks like the roof scenes with Dahj would actually be on the same building; on satellite pictures, there seems to be an area with those stairs south of the main entrance. I'm still a bit sad though that Starfleet seems to have moved their headquarters and it's no longer the Tillman administration building that was used in previous series (actually, for both headquarters and the Academy - so maybe it comes back in some series as the Academy, with its beautiful Japanese garden).
Image No. 23474 Image No. 23475

Of course, at the end of this episode we get to Raffi's home, and we stay there for a bit and see more of it in "The End is the Beginning". The description in the episode tells us it's located at a place called "Vasquez Rocks" - and this time, that's actually the real filming site! Now, Trekkies know this of course, as a whole lot of Trek has been filmed there - most famously the fight between Kirk and the Gorn captain in "Arena". Vasquez Rocks has surely been one of the most-used Star Trek filming sites over the years, though - at least before Picard - I'd say that it ranked second behind Bronson Canyon. How what's nowadays a Natural Area park becomes a place to live in by 2399 is up to anyone's speculation. ;-)
Image No. 23479 Image No. 23480

I guess in the 3 introductory episodes we had more different filming sites than in either of the two whole seasons of Discovery seen so far, but right in the next episode, "Absolute Candor", we got yet another interesting place! A lot of that episode takes place on the planet Vashti, with three sets of scenes on its main square with the bar setting: in the "cold open" / flashback, when Picard beams down to the planet again in the show's present, and before he leaves, including the fight scene. Given that there were multiple hints of shooting taking place at Universal Studios Hollywood, and the sets having a somewhat familiar look, more Mexican than totally alien, it did not take long to identify where those scenes were filmed: it's the standing "Mexican Street" / "Old Mexico Place" set on Universal's backlot - which you can usually visit with the Studio Tour as an attraction of their Theme Park. The pictures, of the bar area and basically from there in the direction of Picard's beam-in point, are from one of those tours I took in 2013.
Image No. 23477 Image No. 23478

In the following two episodes, I could not make out any filming sites, so I guess they pretty much filmed those at Santa Clarita Studios where the production of the series is based. I know we will have some location(s) to talk about in the second half of the season though - not sure if there's as many as in the first few episodes, but I hope we'll have a few good ones!

By KaiRo, at 00:25 | Tags: filming sites, photos, Star Trek, travel | no comments | TrackBack: 0

6 February 2020

FOSDEM, and All Those 20's

I've been meaning to blog again for some time, and just looked in disbelief at the date of my last post. Yes, I'm still around. I hope I get to write more often in the future.

Ludo just posted his thoughts on FOSDEM, which I also attended last weekend as a volunteer for Mozilla. I have been attending this conference since 2002, when it first went by that exact name, and since then have AFAIK only missed the 2010 edition, giving talks in the Mozilla dev room almost every year - though funnily enough, in two of the three years in which I've been a member of the Mozilla Tech Speakers program, my talks were not accepted into that room, while I made it in all the years before. In fact, that says more about how many speakers are interested in getting into this room nowadays, while in the past there were probably fewer submissions in total. So, this year I helped out Sunday's Mozilla developer room by managing the crowd entering/leaving at the door(s), similar to what I did in the last few years, and given that we had fewer volunteers this year, I also helped out at the Mozilla booth on Saturday. Unfortunately, being busy volunteering on both days meant that I did not catch any talks at all at the conference (I hear there were some good ones, esp. in our dev room), but I had a number of good hallway and booth conversations with various people, esp. within the Mozilla community - be it with friends I had not seen for a while, new interesting people within and outside of Mozilla, or conversations clearing up lingering questions.

Image No. 23467 Image No. 23470 Image No. 23464 Image No. 23468
(pictures by Rabimba & Bob Chao)

Now, this was the 20th conference by the FOSDEM team (their first one went by "OSDEM", before they added the "F" in 2002), and the number 20 is coming up for me all over the place - not just doing double duty in the current year's number 2020, but even in the months before, I started my row of 20-year anniversaries in terms of my Mozilla contributions: first bug reported in May, first contribution contact in December, first German-language Mozilla suite release on January 1, and it will continue with the 20th anniversaries of my first patches to shared code this summer - see my 'My Web Story' post from 2013 for more details. So, being part of an Open-Source project with more than 20 years of history, celebrating a number of 20th anniversaries in that community, I see that number popping up quite a bit nowadays. Around the turn of the century/millennium, a lot of change happened, for me personally but all around as well. Since then, it has been a whirlwind, and change is the one constant that really stayed with me and has become almost a good friend. A lot of changes are going on in the Mozilla community right now as well, and after a bit of a slump and trying to find my new place in this community (since I switched back from staff to volunteer in 2016), I'm definitely excited again to try and help build this next chapter of the future with my fellow Mozillians.

There's so much more going around in my mind, but for now I'll leave it at that: In past times, when I was invited as volunteer or staff, the Mozilla Summits and All-hands were points that energized me and gave me motivation to push forward on making Mozilla better. This year, FOSDEM, with my volunteering and the conversations I had, did the same job. Let's build a better Internet and a better Mozilla community!

By KaiRo, at 14:02 | Tags: community, FOSDEM, Mozilla, Tech Speakers | no comments | TrackBack: 1

13 July 2018

VR Map - A-Frame Demo using OpenStreetMap Data

As I mentioned previously, the Mixed Reality "virus" has caught me recently and I spend a good portion of my Mozilla contribution time with presenting and writing demos for WebVR/XR nowadays.

The prime driver for writing my first such demo was that I wanted to do something meaningful with A-Frame. Previously, I had only played around with the Hello WebVR example and some small alterations around the basic elements seen in that one, which is also pretty much what I taught to others in the WebVR workshops I held in Vienna last year. Now, it was time to go beyond that, and as I had recently bought an HTC Vive, I wanted something where the controllers could be used - but still something that would fall back nicely and be usable in 2D mode on a desktop browser or even mobile screens.

While I was thinking about what I could work on in that area, another long-standing thought crossed my mind: How feasible is it to render OpenStreetMap (OSM) data in 3D using WebVR and A-Frame? I decided to try and find out.

Image No. 23346Image No. 23344Image No. 23338

First, I built on my knowledge from Lantea Maps and the fact that I had a tile cache server set up for that, and created a layer of a certain set of tiles on the ground to form the base. That brought me to a number of issues to think about and make decisions on: First, should I respect the curvature of the earth, possibly putting the tiles and the viewer on a certain place on a virtual globe? Should I respect the terrain, especially the elevation of different points on the map? Also, as the VR scene relates to real-world sizes of objects, how large is a map tile actually in reality? After a lot of thinking, I decided that this would be a simple demo, so I would assume the earth is flat - both in terms of curvature or "the globe" and terrain - and the viewer would start off at coordinates 0/0/0, with x and z being the horizontal coordinates and y the vertical component, as usual in A-Frame scenes. For the tile size, I found that with OpenStreetMap using the Mercator projection, the tiles always stay squares, with different real-world sizes based on the latitude (and zoom level, but I always use the same high zoom there). In this respect, I still had to take account of the real world being a globe.
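That latitude dependency boils down to a simple relationship - a small sketch (the zoom level and latitude are just example values):

import math

EARTH_CIRCUMFERENCE_M = 40075016.686  # equatorial circumference in meters

def tile_size_meters(latitude_deg, zoom):
    # With the Web Mercator projection OSM tiles use, a tile is always square,
    # but the ground distance it covers shrinks with the cosine of the latitude.
    return EARTH_CIRCUMFERENCE_M * math.cos(math.radians(latitude_deg)) / (2 ** zoom)

print(tile_size_meters(48.2, 17))  # a zoom-17 tile around Vienna: roughly 200 m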

Once I had those tiles rendering on the ground, I could think about navigation and I added teleport controls, later also movement controls to fly through the scene. With W/A/S/D keys on the desktop (and later the fly controls), it was possible to "fly" underneath the ground, which was awkward, so I wrote a very simple "position-limit" A-Frame control later on, which prohibits that and also is a very nice example for how to build a component, because it's short and easy to understand.

All this isn't using OSM data per se, but just the pre-rendered tiles, so it was time to go one step further and dig into the Overpass API, which allows querying and retrieving raw geo data from OSM. With Overpass Turbo I could try out and adjust the queries I wanted to use and then move those into my code. I decided the first exercise would be to get something that is a point on the map, a single "node" in OSM speak, and when looking at rendered maps, I found that trees seemed to fit that requirement very well. An Overpass query for "node[natural=tree]" later, and after some massaging of the result into a format that JavaScript can nicely work with, I was able to place three-dimensional A-Frame entities in the places where the tiles had the symbols for trees! I started with simple brown cylinders for the trunks, then placed a sphere on top of them as the crown, and later got fancy by evaluating various "tags" in the data to render accurate height, crown diameter, trunk circumference and even a different base model for needle-leaved trees, using a cone for the crown.
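For anyone wanting to try that kind of query outside the browser, here is a small sketch of calling the Overpass API with a query like the one mentioned above (the bounding box is just an example area in Vienna):

import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def fetch_trees(south, west, north, east):
    # All nodes tagged natural=tree inside the given bounding box, as JSON.
    query = f"""
    [out:json][timeout:25];
    node[natural=tree]({south},{west},{north},{east});
    out;
    """
    resp = requests.post(OVERPASS_URL, data={"data": query})
    resp.raise_for_status()
    return resp.json()["elements"]

trees = fetch_trees(48.19, 16.36, 48.20, 16.37)
print(len(trees), "trees found")
if trees:
    print("first one:", trees[0]["lat"], trees[0]["lon"], trees[0].get("tags", {}))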

But to make the demo really look like a map, it of course needed buildings to be rendered as well. Those are more complex, as even the simpler buildings are "ways" with a variable number of "nodes", and the more complex ones have holes in their base shape and therefore require a compound (a "relation" in OSM speak) of multiple "ways" for the outer shape and the inner holes. And then, the 2D shape given by those properties needs to be extruded to a certain height to form an actual 3D building. After finding the right Overpass query, I realized it would be best to create my own "building" geometry in A-Frame, which would take the inner and outer paths as well as the height as parameters. In the code for that, I used the THREE.js library underlying A-Frame to create a shape (potentially with holes), extrude it to the right height and rotate it to actually stand on the ground. Then I used code similar to what I had for trees to create A-Frame entities with that custom geometry. For the height, I would use the explicit tag in the OSM database if present, estimate it from the number of levels/floors if those were given, or else fall back to a default. And I would even respect the color of the building if there was a tag specifying it.
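
Condensed a lot, a custom geometry along these lines can be sketched as follows (the parameter names and the way the footprint paths are passed in are illustrative assumptions, not the demo's actual interface):

  AFRAME.registerGeometry('building', {
    schema: {
      outerPoints: {default: []},  // footprint as an array of {x, z} points (set via JS)
      holePoints: {default: []},   // optional array of hole paths
      height: {type: 'number', default: 10}
    },
    init: function (data) {
      const shape = new THREE.Shape(
        data.outerPoints.map(p => new THREE.Vector2(p.x, p.z)));
      for (const hole of data.holePoints) {
        shape.holes.push(new THREE.Path(
          hole.map(p => new THREE.Vector2(p.x, p.z))));
      }
      // Extrude the 2D footprint to the building height; older three.js
      // releases call the "depth" option "amount" instead.
      const geometry = new THREE.ExtrudeGeometry(shape,
        {depth: data.height, bevelEnabled: false});
      // ExtrudeGeometry extrudes along +z, so rotate the result to stand
      // upright on the x/z ground plane.
      geometry.rotateX(-Math.PI / 2);
      this.geometry = geometry;
    }
  });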

With that in place, I had a pretty nice demo that uses data directly from OpenStreetMap to render Virtual Reality scenes that can be viewed in a desktop or mobile browser, or even in a full VR headset!

It's available under the name "VR Map" at vrmap.kairo.at, and of course the source code can also be inspected, copied and forked on GitHub.

Image No. 23343

Again, this is intended as a demo, not a full-featured product, so e.g. it at this time only renders an area of a defined size and does not include any code to load additional scenery as you move around. Also, it does not support "building parts", which are the way to specify in OSM that different parts of a building have e.g. different heights or colors. It could also be extended to actually render models of buildings where they exist and are referenced in the database (so that e.g. the Eiffel Tower would look less weird when going to the Paris preset). There are a lot of things that can still be done to improve on this demo for sure, but as it stands, it's a pretty simple piece of code that shows the power of both A-Frame and the OpenStreetMap data, and that's what I set out to do, after all.

My plan is to take this to multiple meetups and conferences to promote both underlying projects and get people inspired to think about what they can do with those ideas. Please let me know if you know of a good event where I can present this work. The first of those presentations happened at the ViennaJS May Meetup, see the slides and video.
I'm also in an email conversation with another OSM contributor who is using this demo as a base for some of his work, e.g. on rendering building models in 3D and VR and allowing people to correct their position data.

Image No. 23347

I hope that this demo spawns more ideas of what people can do with this toolset, and I'll also be looking into more demos that will probably move in different directions. :)

By KaiRo, at 23:28 | Tags: A-Frame, Mixed Reality, Mozilla, OSM, VR Maps, WebVR, WebXR | no comments | TrackBack: 1

July 11, 2018

My Journey to Tech Speaking about WebVR/XR

Ever since a close encounter with burning out (thankfully, I didn't quite get there) forced me to leave my job with Mozilla more than two years ago, I have been looking for a place and role in the Mozilla community that feels good for me. I immediately signed up to join Tech Speakers, as I have always loved talking about Mozilla tech topics, and after all, breaking down complicated content and communicating it to different groups is probably my biggest strength - but finding the topics I want to present at conferences and other events has been a somewhat harder journey.

I knew I had to keep my distance from crash stats, despite knowing the area inside and out and having developed some passion for it: staying as a volunteer in the same area that had almost burned me out as a job was just not a good idea, from multiple points of view. I thought about building up some talks about working with data, but that still was a bit too close to that past and not something I presently do a lot of (I mostly work in blockchain technology today), so it didn't go far (but maybe it will happen at some point).
On the other hand, I got more and more interested in some of the things the Open Innovation group at Mozilla was doing, and even more in what the Emerging Technologies teams bring into the Mozilla and web sphere. My talk (slides) at this year's local "Linuxwochen Wien" conference was a very quick run-through of what's going on there, and it's a whole stack of awesomeness, from Mixed Reality via codecs, Rust, Voice and whatnot to IoT. I would love to dig a bit into the latter, but I haven't found the time yet.

What I did find some time for is digging into WebVR (now WebXR, where "XR" means "Mixed Reality") and the A-Frame library that Mozilla has created to make it dead simple to create your own VR/XR experiences. Last year I did two workshops in Vienna in that area, another one this year, and I'm planning more of them. It's great how people with just some HTML knowledge can easily build something there, as can people who are more into JS programming, who can dig even deeper. And the immersiveness of VR with a real headset blows people away again and again in any case, so it's a good thing to show off.

While last year I only had cardboards with some left-over Sony Z3C phones (thanks to Mozilla) to show some basic 3DoF (rotation only) VR at low resolution, this already proved interesting to the people I presented to or ran workshops with. This year, I decided to buy an HTC Vive, seeing its price go down somewhat before the next generation of headsets would ship. (As a side note, I chose the Vive over the Rift because Linux drivers are available and because I don't want to give money to Facebook.) Along with a new laptop with a high-end GPU that can drive the VR headset, I got into fully immersive 6DoF VR and, I have to say, got somewhat addicted to the experience. ;-)

Image No. 23334 Image No. 23341 Image No. 23338

I ran a demo booth with A-Painter at "Linuxwochen Wien" in May, and people were awed both at the VR experience and at the fact that this was all running in plain Firefox! Spreading the word about new web technologies can be really fun and rewarding with experiences like that! Next to showing demos and using VR myself, I also got into building WebVR/XR demos of my own (I'm more the person to do demos and prototypes and spread the word, rather than building long-lasting products) - but I'll leave that for another blog post that will be coming up very soon! :)

So, for the moment, I have found a place in the community that I feel very comfortable with: doing demos and presentations about WebVR or "Mixed Reality" (I still need to dig into AR, but I don't have fitting hardware for that yet) as well as giving people an overview of the Emerging Technologies "we" (MoCo and the Mozilla community) are bringing to the web, trying to get people excited so they use those technologies or hopefully even contribute to them. Being at the forefront of innovation for once feels really good, and I hope it lasts long!

By KaiRo, at 21:41 | Tags: A-Frame, Emerging Technologies, Mixed Reality, Mozilla, VR Maps, WebVR, WebXR | no comments | TrackBack: 1
