Software developer, entrepreneur. Currently exploring Blockchain, ML, and CV.

Challenges Facing Cryptocurrencies and How They're Being Solved

It's still the early, wild-west days of the cryptocurrency world. There's so much day-to-day drama that it's easy to lose sight of the overall direction in which things are moving.

I want to zoom out and take a look at some of the major risks and problems facing the industry, and how they might be addressed in the near to medium term.

1. Transactions are slow and expensive.

2. Too volatile to be a currency.

3. Governments will shut it down.

4. There are too many different coins.

5. It's bad for the environment.

6. It's not user friendly.

The Next Feature Fallacy

  • Believing that building another feature is what will make your product explode.
  • Time is better spent improving the features that already resonate.
  • Always be getting feedback from users: qualitative and quantitative.
  • It's easy to build features. It's hard to iterate against measurement. One is a lottery, the other discipline.
  • Growth is engineering.

-- Justin Kan, CEO at Atrium

Moore's Law

This is Moore's Law over the last hundred years.

I want you to notice two things from this curve. Number one, how smooth it is -- through good time and bad time, war time and peace time, recession, depression and boom time. This is the result of faster computers being used to build faster computers. It doesn't slow for any of our grand challenges. And also, even though it's plotted on a log curve on the left, it's curving upwards.

The rate at which the technology is getting faster is itself getting faster.

-- Peter Diamandis, Abundance

Moore's Law Graph

Date: November 16, 2015

That one time we crowdsourced the price of marijuana

The idea

A few years ago, my friend Cory and I launched a project, subtly named, to answer some curiosities we had about the real street value of marijuana.

After watching a National Geographic documentary on marijuana, which cited some interesting (but seemingly exorbitant) figures about the price of the plant as it travels across borders, facing different economic and legal statuses from state to state, we realized that nobody really knows the true street price: because of its black market status, almost no information flows.

And so we decided to start an experiment in crowdsourcing this data by simply asking consumers how much they paid. Initially, the idea seemed so stupid it would be more aptly named a highdea; who the hell would voluntarily submit data about what they paid for an illegal product?


The plan was to create the site with as little investment as possible - a single page with a web form for posting new submissions. We had one call-to-action: "We crowdsource the street value of marijuana from the most accurate source possible: you, the consumer. Help by anonymously submitting data on the latest transaction you've made." It looked something like this:

Initial version of the homepage

Once we had the site working, we launched it by posting to three online communities: Hacker News and two of the major Reddit forums focused on pot (/r/marijuana and /r/trees):

Our initial post on reddit

The posts picked up a ton of traction, rocketing to the front page of all three websites; people were definitely interested in the mission and its potential findings. Furthermore, with California's Proposition 19 just around the corner, the timing couldn't have been better.

Most importantly though, users were submitting data. A lot of it! We displayed the submitted data in a simple table:

Showing the data in a table

Good data, bad data

As the rate at which people were submitting data picked up - hundreds, then thousands - so too did the number of bogus price entries. At first, we removed them manually, sorting and deleting outliers directly in the database. We'd soon need a more scalable solution.

We decided on a simple outlier filter using standard deviations. The idea is that any data point too far from the mean (beyond ~2 standard deviations) is discarded from the data set.

Removing outliers with standard deviations

To calculate the standard deviation for our data set, the following formula is used:

Formula for calculating standard deviation
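In symbols, this is the population standard deviation over the n submitted prices:

```latex
\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}}
```

where x_i is the i-th submitted price and x̄ is the mean price.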

To implement this in PHP, we did the following:

// fetch the submission set

$submissions = Submissions::find(array(
    'country' => $country,
    'region' => $region,
    'city' => $city
));

// calculate the count and mean

$count = count($submissions);

$sum_prices = array_reduce($submissions, function($carry, $submission) {
    return $carry + $submission['price'];
}, 0);

$mean_price = $sum_prices / $count;

// calculate the standard deviation

$sum_of_squared_differences = array_reduce($submissions, function($carry, $submission) use ($mean_price) {
    return $carry + pow($submission['price'] - $mean_price, 2);
}, 0);

$std_deviation = sqrt($sum_of_squared_differences / $count);

// remove outliers (anything more than 2 standard deviations from the mean)

$filtered_submissions = array_filter($submissions, function($submission) use ($mean_price, $std_deviation) {
    return abs($submission['price'] - $mean_price) < 2 * $std_deviation;
});
This cleaned up the data a lot. With outliers no longer affecting the data, the numbers appeared much more accurate.
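For readers who don't speak PHP, the same two-standard-deviation filter can be sketched in Python (the sample prices below are made up for illustration):

```python
def filter_outliers(prices, k=2.0):
    """Keep only prices within k standard deviations of the mean."""
    n = len(prices)
    mean = sum(prices) / n
    # Population standard deviation, matching the PHP above
    std = (sum((p - mean) ** 2 for p in prices) / n) ** 0.5
    return [p for p in prices if abs(p - mean) < k * std]

# A bogus $9999 entry among plausible per-ounce prices:
print(filter_outliers([220, 240, 210, 9999, 230, 250]))
# → [220, 240, 210, 230, 250]
```

A single wild entry inflates the mean and the deviation, but it still lands far enough out to be dropped while the plausible prices survive.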

Mapping out the data

In just a few days we had data in all 50 states and 10 provinces of Canada. The site would also eventually collect enough data for Europe, Australia, and even city-level statistics.

The logical next step was to visualize all this data. We plotted the data points on top of a map using Google's API. Green pins for cheap, red for expensive.

Mapping the data using Google Maps API

Immediately, we noticed some obvious trends:

For example, the price difference between Southern Ontario and New York - only a few hours' drive - is over $200 per ounce! Does this reveal some sort of arbitrage opportunity?

Adding social metrics

Our (somewhat obvious) hypothesis was that regional prices increased with the legal and social hostility towards the drug.

Although data on the legal status of pot in different regions could be found online, it didn't tell you much about how heavily the law was enforced, and certainly nothing about the general public's social attitudes towards it.

So again, we found ourselves crowdsourcing this data. We decided to add two new metrics - "Social Acceptance" and "Law Enforcement". To avoid clutter and taking away from the main goal of the site, we added these as a secondary poll on the landing page, shown once the user had submitted a price.

Asking for social data

Blowing up in the press and opening up our data

"It's either anonymous, or an ingeniously devious DEA sting operation" - LA Weekly

We began receiving quite a bit of traffic, driven by coverage from many of the major news outlets including the front page of TIME, Forbes, FOX, ABC, CBS, etc. A beautiful, full-page infographic also appeared in the September 2011 issue of WIRED magazine.

In the Sept. 2011 issue of WIRED magazine

A ton of requests also came in from professors, researchers, students, hobbyists, etc. for access to the raw data for their studies or personal interests. Excited about the possibility of awesome projects built on top of ours, we began actively distributing raw database dumps, with plans to have an open API. A few highlights of how some people have made use of the data:

Date: October 30, 2015

Reversing videos efficiently with AVFoundation

The problem

One of the features I wanted to add to my app GIF Grabber in the next version was the ability to set different loop types. In order to support "reverse" and "forward-reverse" style loops, we needed a way to reverse a video (AVAsset) in Objective-C.

Note: GIFGrabber, despite its name, actually keeps all the recordings in MP4 video format until the user actually wants to save it as a GIF. Manipulating video and adding effects are much easier and more efficient than dealing with GIFs.

Example of forward (normal) looping GIF:

Forward-style loop

The same GIF with a reversed loop:

Reverse-style loop

Again, with a forward-reverse (ping-pong) style loop:

Forward-reverse-style loop

Existing solutions

Most of the answers and tutorials I found online suggested using AVAssetImageGenerator to output frames as images and then compositing them back in reverse order into a video. Because of the way AVAssetImageGenerator works, there are some major drawbacks to this solution:

Since the reversed video was going to be concatenated with the original, any difference in quality or timing would be very noticeable. We needed them to be exactly the same.

A more efficient solution

Since we deal with relatively short videos (30 seconds or less) we wanted to perform the procedure completely in-memory.

This can be achieved by:

  1. Use AVAssetReader to read in the video as an array of CMSampleBufferRef samples (this struct contains the raw pixel data along with timing info for each frame).
  2. Extract the image/pixel data for each frame and append it with the timing info of its mirror frame. (This step is necessary because we can't simply append the CMSampleBufferRef structs in reverse order; the timing info is embedded in each one.)
  3. Use AVAssetWriter to write it back out to a video file.

You can find the source code here. In the next section, we'll walk through some of the more complicated parts.

1 Read in the video samples

// Initialize the reader

AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] lastObject];

NSDictionary *readerOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                        [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange], kCVPixelBufferPixelFormatTypeKey,
                                        nil];
AVAssetReaderTrackOutput *readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                                                                    outputSettings:readerOutputSettings];
[reader addOutput:readerOutput];
[reader startReading];

First, we initialize the AVAssetReader object that will be used to read in the video as a series of samples (frames). We also configure the pixel format for the frame. You can read more about the different pixel format types here.

// Read in the samples

NSMutableArray *samples = [[NSMutableArray alloc] init];

CMSampleBufferRef sample;
while ((sample = [readerOutput copyNextSampleBuffer])) {
    [samples addObject:(__bridge id)sample];
    // The array retains the sample; release the +1 from copyNextSampleBuffer
    CFRelease(sample);
}
Next, we store the samples in an array. Note that because CMSampleBufferRef is a native C type, we cast it to the Objective-C type id using __bridge.

2 Prepare the writer that will convert the frames back to video

// Initialize the writer

AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                                  fileType:AVFileTypeMPEG4
                                                     error:&error];

This part is pretty straightforward: the AVAssetWriter object takes in an output path and the file type of the output file.

NSDictionary *videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
                                        @(videoTrack.estimatedDataRate), AVVideoAverageBitRateKey,
                                        nil];

NSDictionary *writerOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                        AVVideoCodecH264, AVVideoCodecKey,
                                        [NSNumber numberWithInt:videoTrack.naturalSize.width], AVVideoWidthKey,
                                        [NSNumber numberWithInt:videoTrack.naturalSize.height], AVVideoHeightKey,
                                        videoCompressionProps, AVVideoCompressionPropertiesKey,
                                        nil];

AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                                                 outputSettings:writerOutputSettings
                                                               sourceFormatHint:(__bridge CMFormatDescriptionRef)[videoTrack.formatDescriptions lastObject]];

[writerInput setExpectsMediaDataInRealTime:NO];

Next, we create the AVAssetWriterInput object that will feed the frames to the AVAssetWriter. The configuration will depend on your source video - here, we specify the codec, dimensions, and compression properties.

We set the expectsMediaDataInRealTime property to NO since we are not processing a live video stream and therefore the writer can take its time without dropping frames.

3 Reverse the frames and save to file

// Initialize an input adaptor so that we can append pixel buffers

AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:writerInput
                                                                                                      sourcePixelBufferAttributes:nil];

[writer addInput:writerInput];

[writer startWriting];
[writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[0])];

First, we create an AVAssetWriterInputPixelBufferAdaptor object that acts as an adaptor to the writer input. This will allow the input to read in the pixel buffer of each frame.

// Append the frames to the output.
// Notice we append the frames from the tail end, using the timing of the frames from the front.

for (NSInteger i = 0; i < samples.count; i++) {
    // Get the presentation time for the frame
    CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[i]);

    // Take the image/pixel buffer from the tail end of the array
    CVPixelBufferRef imageBufferRef = CMSampleBufferGetImageBuffer((__bridge CMSampleBufferRef)samples[samples.count - i - 1]);

    // Wait until the writer input can accept more data
    while (!writerInput.readyForMoreMediaData) {
        [NSThread sleepForTimeInterval:0.1];
    }

    [pixelBufferAdaptor appendPixelBuffer:imageBufferRef
                     withPresentationTime:presentationTime];
}

[writerInput markAsFinished];
[writer finishWriting];

Note: Each sample (CMSampleBufferRef) contains two key pieces of information: a pixel buffer (CVPixelBufferRef) holding the pixel data for the frame, and a presentation timestamp that describes when it is to be displayed.
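The mirrored pairing at the heart of this step can be illustrated abstractly, with plain Python lists standing in for the AVFoundation types (the names here are made up for illustration):

```python
def reverse_frames(frames, timestamps):
    """Pair each original presentation timestamp with the
    pixel data of its mirror (count - i - 1) frame."""
    n = len(frames)
    return [(timestamps[i], frames[n - 1 - i]) for i in range(n)]

# Three frames f0, f1, f2 shown at t = 0.0, 0.1, 0.2:
print(reverse_frames(["f0", "f1", "f2"], [0.0, 0.1, 0.2]))
# → [(0.0, 'f2'), (0.1, 'f1'), (0.2, 'f0')]
```

The timestamps keep their original, increasing order; only the pixel data is mirrored, which is exactly why the embedded timing info prevents appending the raw samples in reverse.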

Finally, we loop through all the frames, taking the presentation timestamp of each and the pixel buffer from its mirror (count - i - 1) frame. We pass these to the pixelBufferAdaptor we created earlier, which feeds them into the writer. We also make sure that the writerInput is ready before passing it the next frame.

Once the loop completes, we write the output to disk.

That's it! Your reversed video should be saved and accessible at the output path you specified when initializing the writer.

Download the source code

You can download the final source code here.

Extra-low brightness on iOS devices

Here's a useful trick for iOS to dim the brightness further than control center allows.

It's a pretty hidden feature and I'm not sure what Apple's intention for this setting was, but either way, it's quite useful for reading at night.

Unfortunately, it seems this won't help preserve battery life (it's a software filter and not a physical dimming of the backlight).

To enable:

  1. Go to Settings -> General -> Accessibility -> Zoom.
  2. Enable Zoom.
  3. Triple tap the screen with three fingers to bring up the Zoom configuration menu.
  4. Set the Zoom Region to "Full Screen Zoom".
  5. Set the Filter to "Low Light".
  6. Exit the menu and disable Zoom.
  7. Go to Settings -> General
  8. Scroll all the way down and set your Accessibility Shortcut to "Zoom".

Done! Now, when you triple click your home button, it should activate the Low Light filter.