Electron's first meetup
August 28, 2015 | Jeff Weinstein

The Electron community is positively charged! The project, formerly known as Atom-Shell, is a great way to build cross-platform desktop apps using web technologies. Given the 2200 people in the Electron Slack channel, the nearly five thousand commits, and the active discussion board, we figured it’s about time the project had a meetup in real life.

The community seems to agree: we hosted more than 50 people at the inaugural Bay Area Electron Meetup for four great talks and a big pile of pizza. Engineers from GitHub, Slack, Wagon, and Nylas spoke about building polished apps across Mac, Windows, and Linux using the Electron framework.

  1. The History of Electron by Kevin Sawicki (GitHub)
  2. Integrating with Native Code by Paul Betts (Slack)
  3. Electron, React, and Haskell. Oh my! by Mike Craig (Wagon)
  4. Making a web app feel native by Ben Gotow (Nylas)

The History of Electron by Kevin Sawicki (Engineer at GitHub)

Why Electron was built, what makes it special, how it differs from similar frameworks, and where it is going.

The History of Electron

Integrating with Native Code by Paul Betts (Engineer at Slack)

How Slack’s Desktop app calls native operating system and library methods on OS X, Windows, and Linux, via Node Native Modules, node-ffi, and edge.js.

Native Modules in Electron

Electron, React, and Haskell. Oh my! by Mike Craig (CTO at Wagon)

How Wagon is building a hybrid desktop/web data analytics app using Electron, React, and Haskell to solve engineering challenges that aren’t possible with the browser alone.

Electron, React, and Haskell Oh My!

Making a web app feel native by Ben Gotow (Engineer at Nylas)

A dive into some JavaScript and CSS tricks the Nylas team uses to make their Electron-based mail app feel native across Mac, Windows, and Linux.

Building Native Experiences with Electron

If you’d like to come to the next one, join the Electron meetup group and we’ll see you soon.

Dear AWS Big Data Blog
August 13, 2015 | Andy Granowitz

We love Redshift and we love R. So we were delighted to see an AWS post about how to connect Redshift to R. Petabytes of data plus all the statistical models you can imagine? I’m in!

Wagon is a great way to use Redshift.

Unfortunately, the recommended setup instructions are awfully cumbersome, in large part due to the 12 unfriendly steps required to connect SQL Workbench/J. In Wagon, connecting to Redshift is one step and requires no complex configuration: just copy and paste your connection details. Now you're ready to run those same Redshift queries in Wagon. SQL Workbench/J is no longer needed.

While R offers an incredible collection of statistics and visualization libraries, it can often be more than you need for basic exploration and analysis. Many analysts also find R overwhelming and unneeded for most of their work. In fact, the data manipulation and visualization in the blog post of flight delays by month can be recreated in Wagon with one query and a quick drag-and-drop chart. You don't need to be a command line wizard, just a little SQL curious!

Hopefully this post saves you time when you're interacting with Redshift in R, or even if you're looking to run some more custom queries against your Redshift cluster. Sign up for early access to Wagon if you want to try it for yourself. Gogogo!

Oh! And we're working on a deep integration with R. If you have strong opinions about it, join us at band.wagonhq.com.

The Serendipity Machine
August 11, 2015 | Jeff Weinstein

We enjoy wonderful little surprises every day at Wagon: serendipitous run-ins, small-world moments, and hilarious coincidences. Whether it's the cadre of visitors we host for lunch, the tipsy bar patron who is secretly a SQL analyst, or the friend-of-a-friend lurking within our teammate's apartment asking, "Do you work at that Haskell data startup?", we're amazed by where our fellow thinkers are hiding. The real secret is that these things, which might appear random, happen for a reason: we constantly seek to talk with people and bring them together. Serendipity doesn't just happen; we make it.

Secret Worlds by xkcd

We’re fortunate to have a wide network of people: users, open-source contributors, recruits, neighbors, partners, friends of friends, out-of-town visitors, investors, etc… It’s invigorating to hear stories from these different groups; their experiences motivate us and give context to the problems we’re solving. We talk with hundreds of people each day through Intercom chat, our public #bandwagon Slack channel, and on Twitter.

A happy surprise about early-stage startups is that you’ll meet more people than at a large company. Your network isn’t just your immediate team and your chain of managers, it’s anyone, anywhere.

We create the space for serendipitous encounters through tech meetups, our #bandwagon happy hours, open invites to lunch, our high-touch user communication, and Wagon's faux-EIR program. Hosting events at our space in San Francisco is a great way to bring people together. Almost every week, we have 40+ people at our space for an after-work event. To make sure things go smoothly, events even have their own checklists: "In Event of Meetup" and "In Case of Party". We maintain a mini-CRM so we can throw impromptu events in other cities; we had a lot of fun hosting a happy hour in New York with Wildcard. We have an open space facing Valencia Street with views of Sutro Tower. Our Wagon office is configured for flexibility rather than posh; we hope you'll find it welcoming.

Upcoming Wagon Events

Bay Area Haskell Meetup - Wed, Aug 12 at 6pm

Join us for three half-hour Haskell talks about distributed code, graphs, and more. Matt DeLand (Wagon), Tikhon Jelvis (Esper), and John Chee (Twitter) will be speaking at our next Haskell event.

Wharton Customer Analytics Meetup - Tue, Aug 18 at 6:30pm

With two Penn grads on the team, we’re excited to host the next Bay Area Wharton and Penn analytics event! There will be a few 5 minute “lightning talks” from alumni working on applications of customer analytics.

Electron Meetup - Tue, Aug 25 at 6pm

Learn from Kevin Sawicki (GitHub), Paul Betts (Slack), Mike Craig (Wagon), and Ben Gotow (Nylas) about how to make polished apps across Mac, Windows, and Linux using the open source project Electron (formerly Atom-Shell). We’ll have four 20-minute tech talks. Come celebrate the community’s first official meetup in San Francisco! There will be pizza.

#bandwagon Happy Hour - Wed, Sept 2 at 6pm

Come to our monthly happy hour at our office in San Francisco. It’s a great way to meet other data folks and friends of Wagon. It’s very relaxed so feel free to just pop in between 6 - 9pm, Wed Sept 2nd.

Joe Nelson's fireside chat at the recent Haskell meetup at Wagon.
Wagon hosts monthly happy hours for our users and friends.

Joe Nelson’s fireside Wagon Haskell chat and a recent #bandwagon happy hour at our office in San Francisco.

Thanks to Mission Bicycle Company (our neighbors), Mom & Daughters Chair and Table Rentals (just-in-time chair rental), and Instacart (official Tecate supplier) for making these events possible.

We hope to see you soon! If you’d like to be our next Wagon faux-EIR or want to stop by to say hi, just email us at hello@wagonhq.com or give us a tweet at @WagonHQ.

Building an Analytics Pipeline in 2015
August 06, 2015 | Andy Granowitz

Every company needs an analytics data platform queryable by SQL.

Using a single analytics tool or relying on logs from a single source is a fast way to get started but is rarely sufficient. You’ll realize you need a better data strategy when attempting more detailed analytics tasks: cohorting customers based on segments available in multiple data sources, analyzing long time scale trends, or making data available to other applications. Unfortunately, you’ll quickly reach the limit of your off-the-shelf tool.

There has been a dramatic increase in data being created and fascination with Big Data, but less of a corresponding uptick in how to capture its value. Engineering a system to ingest, transform, and process data from many (changing, flaky) sources has long been a Very Hard Problem™. Doing this well requires hard work – the dreaded ETL.

We see more and more companies choosing to invest in SQL warehouses and the requisite engineering well before they become large businesses. How do you effectively build one of these things? And what makes building robust analytics infrastructure difficult?

Google Trends for Big Data vs. ETL See full Google Trends report

Here’s an example illustrating the core problems: You implemented a new purchase flow in your app and you’d like to understand conversion rates (tracked from logs) broken down by your email marketing A/B test (tracked from a 3rd party). The log lines you’re generating have new structure and may need to be re-parsed to fit into your existing schema. The A/B testing info may live in a different place than user data. Boilerplate reporting tools and drag-and-drop analytics UIs are great, but they require structuring ahead of time, and the new checkout flow is already live in production. Manually doing this analysis one time is annoying, but turning it into a reliable, repeatable practice is nearly impossible without dedicated engineering effort.

Your goal should be to get your data into a data warehouse that can be queried directly by people and programs. While it’s not straightforward, it’s important to understand the pieces. We see companies addressing this problem by focusing on the following steps:

  1. For each data source: generate, collect, and store your data
  2. Transform data into usable, queryable form
  3. Copy multiple sources into a single place
  4. Enjoy the data fruits of your data labor

The first step is collecting the data with as much structure as possible. You need to generate the data, transmit it from apps, browsers, or services for collection, and then safely store it for later. Many mobile and web analytics providers offer all three of these steps; others focus on a subset. For example, Heap and Mixpanel generate many app usage events automatically. Others focus on receiving data and making it available to read later (Keen and Splunk as different examples). Segment takes advantage of the difficulty of logging to many places by transmitting data to many of the above services with one API call.

Another large source of data is logs (usually messy and unstructured). Just having logs is not enough: they must be massaged into usable rows and columns. Some log lines help engineers analyze technology performance or debug errors, some must be interpreted to signal “human” events, and some have been around for so long that no one remembers why they’re there. Logs are rarely generated with their end purpose or a fixed type system in mind. Transforming these raw strings is necessary to make them usable rather than just searchable.

For example, you may need to combine three separate log lines to signal a successful user flow, or compare log lines against prior data to understand whether a user is new, active, or re-activated. Or maybe you need to remove those extra pesky spaces around your beautiful integers or standardize timestamps across timezones. Trifacta, Paxata, and Tamr offer technical tools for transforming ugly log forms into structured rows and columns. Or you’ll roll your own.
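To make the transformation step concrete, here is a minimal sketch in JavaScript of turning raw log strings into typed rows. The pipe-delimited log format and the function names are hypothetical, purely for illustration; real log schemas are rarely this tidy.

```javascript
// Sketch: structure a raw log line like
//   "2015-08-06T12:01:03-07:00 | signup |  42  "
// into a typed row with a standardized UTC timestamp.
function parseLogLine(line) {
  const [ts, event, userId] = line.split("|").map((s) => s.trim());
  return {
    event,                          // the "human" event this line signals
    userId: parseInt(userId, 10),   // strip pesky spaces, coerce to an integer
    at: new Date(ts).toISOString(), // standardize timestamps across timezones
  };
}

// Sketch: combine several structured rows into one success signal,
// e.g. a user completed every step of a checkout flow.
function completedFlow(rows, requiredSteps) {
  const seen = new Set(rows.map((r) => r.event));
  return requiredSteps.every((step) => seen.has(step));
}
```

A transformation layer like this is what turns logs from merely searchable strings into queryable rows, whether you buy a tool or roll your own.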

Dilbert Cartoon

Once data collection systems are in place, you want to get this data flowing into a data warehouse. While some of the aforementioned tools provide their own interface for accessing collected and processed data, joining across multiple sources is difficult if not impossible, and their interfaces are often inflexible and cumbersome. Luckily, many of these services recognize this, and offer easy exports to data warehouses. Amplitude and Segment do this well and offer straightforward exports to Redshift. Google Analytics offers export to BigQuery for Premium customers (~$150k / year). Others make it possible, but require a bit of work (for example, Keen). New startups like Textur and Alooma are working on plumbing data into hosted RDBMS systems.

Outside of dedicated analytics solutions, you often have data from third party sources you’d like to join and analyze (e.g. Salesforce, ZenDesk, MailChimp, etc.). Most of these tools offer APIs to extract data. Building and maintaining these connections from 3rd parties to your data warehouse on your own is doable, but this is often where data integration tools are helpful. Services like Snowplow and Fivetran help.

At this point, data is flowing in a structured way. But where is it going? When assessing a data warehouse, look for:

  1. Large scale ingestion and storage
  2. Fast computation and multi-user concurrency
  3. Easy management and scaling with data
  4. Security and permissioning

There are many that meet these criteria: Amazon Redshift, Google BigQuery, Microsoft Azure SQL Data Warehouse, Hive, Spark, Greenplum, Vertica, Impala (the list goes on!). The largest technology companies (Amazon, Google, Microsoft) are investing in, and subsidizing, these data warehousing solutions. It’s a crucial component to nearly every business, which naturally draws the attention of the tech titans.

Big Data landscape diagram It’s a data jungle out there. Diagram from 2014 by Matt Turck at FirstMark

Phew, now you can enjoy the freedom of querying structured data, and the work (and fun!) begins. We’ll have more on the data analysis step soon!

We’d love to hear how you’re tackling these problems. What are your favorite tools? Where are your most painful pains? Tweet at @WagonHQ or send us a note at hello@wagonhq.com!

Field trip to Arion Press
August 03, 2015 | Jeff Weinstein

Software can feel ephemeral: changing technologies, new platforms, and shifting demands. So, for our recent team outing, we wanted to learn about work that is timeless. We found Arion Press, a handmade book publishing company tucked away in the Presidio, situated between our other planned Outer Richmond activities: slurping soup dumplings and exploring the beaches south of the Golden Gate Bridge.

Arion Press and M & H

Arion Press, along with M & H Type, is the oldest and largest type foundry in the country: casting hot type daily since 1915! They’re true craftspeople: printers, typesetters, bookbinders, and publishers continuing the tradition of making books from scratch. They combine fine art, typography, and prose, frequently pairing commissioned artwork from contemporary artists with great works of literature. They publish two or three high-quality books each year.

Much like the puppet show that is software, producing books is (one) part technical mastery and (ten) parts hard work, checklists, patience, and teamwork. They have their own foundry where they manufacture individual types: think Times New Roman, 10 point, lowercase, italic, letter ‘a’ as one specific thing to be made by hand. They compose the prose by combining these individually made letters, molding them into larger lines and eventually pages. They then pass handmade paper through a machine about the size of two cars that requires three people to operate. At high speed, the machine dips the large metal page stamps into ink, presses them onto the paper, and flings the sheets into a pile. It doesn’t seem like it should work, yet it has better performance and reliability than any office printer we’ve jammed.

Wagon field trip! #wagonhq

A video posted by Jeff Weinstein (@jeff_weinstein) on

After the pages are printed, dried, and have passed the close eye of inspection (they have to occasionally remove ink splotches with a knife), the pages are ready to be bound. The bookbinders taught us how everything comes together to go from the raw materials to a beautiful book.

Thanks to their whole team for having us. Read more about the history of Arion Press and maybe give them a visit during their Thursday tours (book ahead). Inspired by the aesthetic of hand-set type, we made a quick version of our beloved Wagon logo that we hope to see inked one day:

Wagon's old time-y logo

We’re on the lookout for other great San Francisco afternoon trips. If you’ve been to any other off-the-beaten-path spots, let us know!

Team Wagon learns about creating custom types.
Very large printer.
Baker beach

Wagon’s recent field trip. Not pictured: lots of Chinese food!

Electron at Wagon
July 15, 2015 | Mark Daly

While building Wagon, we’ve encountered a few engineering challenges that aren’t easily solved in the browser. Our users want Wagon to connect securely to their database and analyze large amounts of data, while being easy to set up and always up to date. Unfortunately, browsers can’t connect directly to databases and aren’t optimized for processing millions of data results. The standard web browser won’t suffice, so what should we do?

As many companies are discovering, mixing web and native technologies is very compelling. Early adopters including Slack, Atom, Quip, Visual Studio, Spotify, MapBox, Front, and Nylas have found ways to weave these once unrelated approaches.

At Wagon, we’re using GitHub’s Electron (previously “Atom-Shell”) as our underlying app framework. Electron was carved out of the Atom editor project and lets us deploy web UIs to the desktop. Our CTO Mike described Wagon’s technical architecture: a JavaScript application for user experience along with a native process for database connections and streaming data computation. We want a capable desktop application with the ease of developing for the web.


The earliest alpha of Wagon was a command line program that powered a browser app available at localhost. We soon shipped our first Mac app: a download-able, double-click-able version using MacGap. As more people used Wagon, we wanted to update the code silently in the background, deploy to Windows and Linux, and move away from WebKit (we <3 Chromium’s dev tools!). We frequently ship new versions of both the JS and compiled Haskell, and it was painful for our users to manually download and reinstall the application. We briefly considered Mac-specific update mechanisms but it became clear we needed to replace MacGap. In searching for a full-featured, cross-platform web-view container, we found Electron.

Wagon's Electron architecture diagram
Wagon uses Electron to bundle native and web technologies

Electron is based on Chromium, runs on multiple platforms, and comes packaged with useful features like desktop notifications, custom keyboard shortcuts, and native menus. Auto-update, which originally motivated us to try Electron, is easy to set up and simplifies shipping new versions. Electron is evolving quickly and openly, supported by its vibrant community, active development, public Slack room, and commercial backing.

Migrating from MacGap to Electron was straightforward, as Electron’s excellent documentation has instructions on rebranding, packaging, and distribution; we got a prototype up and running on a Friday afternoon. Electron differs from other web-view containers by using Node.js (via io.js) as its entry point: when an Electron app starts, execution begins in a JS program included in the app bundle, which can open windows and interact with the host OS. Node’s and Electron’s rich JS APIs made porting from MacGap easy, and we’ve added features that would otherwise only be possible in a native application (like custom menus and dialogs).

Our static assets are hosted on CloudFront, so we can update the UI without requiring users to redownload the whole application. However, it can take a few seconds for these assets to load and we want an immediate cue that Wagon is working. Here’s how Wagon ensures a smooth app launch experience:

  1. On app start, the main Node process runs our JS App Loader. It loads configuration files, starts background tasks, and opens a renderer to load the latest UI.
  2. The renderer immediately displays a splash.html page that is shipped with the Wagon.app bundle.
  3. The splash page uses Electron’s <webview> tag to load our remote assets and start the JS app in the background while the Wagon logo rolls (some say dances!) on screen.
  4. Once the UI is ready, it notifies the splash page via Electron’s IPC API.
  5. The splash page swaps its content for the UI, creating a seamless transition into the full application.
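The launch flow above can be compressed into a short main-process sketch. The file names (`main.js`, `splash.html`) and IPC channel names are illustrative assumptions, not Wagon's actual code:

```javascript
// main.js — sketch of the splash-then-UI launch flow, running in
// Electron's main (Node) process. Channel and file names are hypothetical.
const { app, BrowserWindow, ipcMain } = require("electron");

app.on("ready", () => {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  // Step 2: show the splash page shipped inside the app bundle immediately,
  // so the user sees something while remote assets load.
  win.loadURL(`file://${__dirname}/splash.html`);
});

// Step 4: the remote UI (loaded in the splash page's <webview>)
// signals readiness over IPC once its assets have finished loading.
ipcMain.on("ui-ready", (event) => {
  // Step 5: tell the splash page to swap its content for the real UI.
  event.sender.send("show-ui");
});
```

The splash page itself would listen for `show-ui` via Electron's renderer-side IPC module and swap in the `<webview>` content.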

When we’re ready to roll out a new version of Wagon.app, we use an automated deployment approach built on GitHub and CircleCI. When we merge a pull request into the master or production branches of our repos, CircleCI automatically builds the application bundle. The components are dropped into the Electron app structure, code-signed, and uploaded to S3. CircleCI also updates a configuration file that our auto-updater API endpoint reads, which lets us notify running instances of Wagon that a new version is available. The update is automatically downloaded in the background, installed, and triggers a desktop notification for the user.
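The "is a new version available" check at the heart of that flow can be as simple as comparing the running app's version against the one in the configuration file. A minimal sketch, assuming a hypothetical dotted-version config format:

```javascript
// Hypothetical shape of the config the auto-updater endpoint serves:
//   { "version": "1.4.2", "url": "https://.../Wagon-1.4.2.zip" }
// Compare dotted version strings piece by piece, numerically,
// so "1.10.0" correctly beats "1.9.9".
function isNewer(latest, current) {
  const a = latest.split(".").map(Number);
  const b = current.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const diff = (a[i] || 0) - (b[i] || 0); // missing pieces count as 0
    if (diff !== 0) return diff > 0;
  }
  return false; // identical versions: nothing to download
}
```

A running instance would poll the endpoint, call a check like this, and only then kick off the background download and install.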

We believe that great software should run in the browser, on phones, and on the desktop. If building across these platforms sounds exciting, check out our open positions, and email us or tweet @WagonHQ. Gogogo!

Bayhac 2015
June 30, 2015 | Joe Nelson

Bayhac is back. The annual Bay Area Haskell conference and hackathon met last weekend for its fifth consecutive year, bringing us fascinating talks and strongly-typed bonhomie. It drew attendees from all over the country, even from faraway Oakland (I’m told to say, “Go Warriors!”).

Each year at Bayhac I am reminded how the Haskell community is thriving: more adoption, new libraries, robust tooling. It seems that the language is destined to fail at its goal of “avoiding success at all costs.” What struck me this year is the number of funded startups developing critical parts of their products in Haskell. They are springing up in San Francisco, some blocks away from the Wagon office, companies like Mirror, Front Row Education, Projector, IMVU, Alpha Heavy Industries, and Pingwell.

Bayhac picture
Another Bayhac picture
Clockwise from top left: Greg Weber, Phil Freeman, Conal Elliott, Dan Burton

The conference started out strong with two talks that interested me personally. Tikhon Jelvis demonstrated how lazy evaluation is fundamental to designing modular code, not just an incidental curiosity. He gave numerous examples of laziness solving problems (the video for his talk is available here).

Then Dan Burton took the stage and gave us a tantalizing vision of the world post-cabal-install: a glimpse of Stack, the spiritual successor of stackage-cli. In addition to locking project dependencies at mutually compatible versions like its predecessor, Stack can install necessary binaries and fetch packages right from git. Building projects, arguably the biggest pain in Haskell land, is getting better.

That was just Friday night. The rest of the weekend had plenty in store, including our very own Mike Craig sharing Wagon’s experience of using Haskell. This technology choice has served our small team well, and his talk dives into its strengths and weaknesses. The presentation showcases our product space, technical architecture, the libraries we lean on, and our deployment strategy. He also discusses Haskell’s learning curve (pros and cons) as well as its positive impact on recruiting. It’s one of the core reasons we’re able to frequently ship new features with confidence.

If a picture is worth a thousand words then a thirty minute video at thirty frames per second is worth fifty-four million words. Here they are:

Big thanks to the Bayhac organizers, notably Maxwell Swadling, who stepped up to bring everyone together, handling everything from food to soliciting talks to the conference web presence. We’re looking forward to next year’s conference. In the meantime, keep a lookout for Wagon- and community-hosted Haskell events, or better yet, join our team!

My First Two Weeks of Haskell at Wagon
June 11, 2015 | Joe Nelson

It’s no secret I’m into Haskell. Two years ago a friend introduced me to what I then affectionately called “moon language,” and it’s been a continuous experience of learning, cursing, and joy ever since.

Loving the language is easy; finding a team that shares this functional programming passion and uses it to build their core product is rarer. That’s what I found at Wagon, and I’d like to share my experience of the exciting first two weeks as a new Wagoner.

Wagon loves Haskell

I’ve written Haskell professionally before, so that part wasn’t a change. What’s different is that my prior use of the language felt almost surreptitious: building an API server here, writing a script there, whatever could fit comfortably within a more conventional consulting project. Now it’s different: everyone is on board, and we own our product destiny and technology choices.

For me it means being exposed to practices across the whole tech stack that were designed with Haskell in mind, not as an afterthought: from dockerizing Cabal, to sharing and versioning packages between client and server code, to streaming statistical calculations over datasets with Conduit. There is a lot that goes into the company goal of creating a modern data collaboration tool.

Although we work together in person (in the Mission district of San Francisco where the walls are muralier, the coffee pour-overier and the bikes fixier) we do much of our code communication via pull requests. Everyone tries to have at least one other person review their work, which keeps us all in the know and leads to a lot of teaching. For instance, here is my teammate Mark suggesting an improvement to my coding style:

(bid, now) <- liftIO $ do
    b <- randomIO
    n <- getCurrentTime
    return (b, n)

Example pull request

Challenges go beyond mere coding style, of course, and the team excels at debugging gnarly issues, including a memory leak caused by a useful yet tricky language feature called lazy evaluation. Our architecture is split between a shared server (Haskell), a client interface (Electron + React), and a client server (Haskell) for local data access. Nowadays it’s common for the browser component on the client to use hundreds of megabytes of memory while the Haskell component races along at twenty megabytes, even when doing intensive streaming operations. This was not always the case, however, and the team used the GHC profiler to identify and correct a memory leak in the streaming subsystem.

Memory profiling in GHC

During the past two weeks I’ve been getting acquainted with more type-safe ways of doing things than I had previously tried. For instance, I’ve previously worked with Hasql for database access (a good choice for low-level and performant PostgreSQL work), and now I’m learning Persistent, which uses Template Haskell to generate structured ways to access and migrate databases. The Wagon server includes a custom routing system with some handy combinators, which has helped me think about patterns for improving code with types.

I’m excited to share more of what I learn in the coming weeks, both the coding and the cultural practices that help us do our best work, as well as things we’re learning to improve. (See also our Engineering at Wagon post.)

We’ll be at BayHac this weekend so make sure to check out our CTO Mike Craig’s talk Sunday, June 14 at 11am or find me. If you love this stuff the way we do, you should come spend a day with our team.

Querying CSVs in Wagon
June 08, 2015 | Andy Granowitz

People use Wagon to query data stored in databases, but sometimes they need to analyze a file that doesn’t (yet) live in a database. Our friend Steve Pike recently showcased this hack in our #bandwagon Slack channel. He used a nifty command line tool called csvsql to automatically create a Postgres table from a CSV file and have it ready for Wagon.

CSVs in Wagon

Here’s how to set it up on Mac:

  1. Install Postgres
  2. Install csvkit
  3. Load your CSV file into Postgres:

     csvsql --db postgres://localhost:5432/postgres --insert --tables mytable /myfile.csv
    • mytable is the name of the new table
    • postgres is the name of the database, normally available by default
  4. Open Wagon and connect to your database: (Don’t have access to Wagon? Sign up for early access!)
    • Nickname: csvsql
    • Type: Postgres
    • Hostname: localhost
    • Port: 5432
    • Database: postgres
    • User: [your computer’s username]
    • Password: [empty]

Woo! You can now write SQL against your CSV file using Wagon.

Try this out with your next CSV file or an example dataset of movie scenes filmed in San Francisco (source). Our team favorite Blue Jasmine isn’t number 1!

Wagon is a great way to analyze databases and now small text files. Thanks csvsql.

Need help with this quick hack? Email us at hello@wagonhq.com.

Weekly Roundup: React
May 27, 2015 | Mike Craig

Our hybrid native/web application’s frontend layer is built with JavaScript, React, and Flux. In 2014, our friends at Facebook advised us to try React, and it has since been a core part of our stack.

React logo

The React community is bustling and there are some great posts to help people start, organize, and scale frontend applications. These articles have helped us:

  1. Alex Lopatin details Sift Science’s migration from Backbone to React in Best practices for building large React applications.
  2. Alexander Early from Fluid maintains an in-depth set of Tips and Best Practices. The discussion in the comments is good too.
  3. Facebook’s Christopher Chedeau has many talks on React architectures but this review of how to scale CSS in a React stack guided our development of Wagon’s custom UI. Merci!
  4. Alex Schepanovski describes React from a jQuery perspective in Boiling React Down to a Few Lines in jQuery.
  5. Ryan Clark has a technical but still approachable Getting Started With React walkthrough. If you like that, read the sibling article Getting Started with Flux too.

We’ll be doing a deeper dive into Wagon’s React architecture, tooling, and how we see it changing in our next engineering blog post. If you like to hack on React, Flux, JavaScript, and Electron, we’re hiring a frontend engineer in San Francisco. Say hi!