I wanted a simple way for React component state to update pushState/history, and I found it in Leif Denby's simple HistoryJSMixin. One thing it was missing: being able to update the URL too (e.g. change "/foo" to "/foo?bar=baf") with that state.
I've added it with a simple two-line change here (lines 57 and 61):
Here's an example snippet of how to use it:
So, as you can see on line 10, you can now pass a url key to the call and have it propagate to the URL bar. I've been using it in places where React Router would be too heavy-duty, and all you really want to propagate is a search term or a page number.
At times in the Posterous days we wanted to add new properties, but a migration would be prohibitively expensive given a large corpus of data.
Here's a simple module you can include in your ActiveRecord objects to do just that — lazily generate potentially costly computation and save it to an object only when it is actually called.
Dragons in these hills — but sometimes you want that. We're going to steer clear of these kinds of hacks for Posthaven, but as a minor exercise in metaprogramming maybe this'll be useful in the future:
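The module itself isn't included in this text, but here's a hedged sketch of how such a lazy, save-on-first-read attribute might look. All names here are mine, and persistence is stubbed with a plain hash so the sketch runs standalone; in a real model the write would be update_column.

```ruby
# Sketch of the pattern (names are mine, not the original module): compute an
# attribute the first time it's read, then persist it so later reads are free.
module LazyAttribute
  def lazy_attribute(name, &block)
    define_method(name) do
      cached = read_attribute(name)
      return cached unless cached.nil?
      value = instance_exec(&block)   # the potentially costly computation
      write_attribute(name, value)    # ActiveRecord: update_column(name, value)
      value
    end
  end
end

# Stand-in for an ActiveRecord model, so the sketch is self-contained
class Post
  extend LazyAttribute

  attr_reader :computations

  def initialize(body)
    @body = body
    @attributes = {}
    @computations = 0
  end

  def read_attribute(name)
    @attributes[name]
  end

  def write_attribute(name, value)
    @attributes[name] = value
  end

  lazy_attribute(:word_count) do
    @computations += 1
    @body.split.size
  end
end

post = Post.new("the quick brown fox")
post.word_count   # computed once and saved
post.word_count   # read back; the computation ran only once
```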
Greg Thornton created some scripts that make setting up IE virtual machines super simple.
Assuming you have curl installed on your Mac, run this to set up IE virtual machines:
curl -s https://raw.github.com/xdissent/ievms/master/ievms.sh | bash
Sometimes you've just got to write code like this:
text = "”“"
But Ruby will freak out and error out like this:
invalid multibyte char (US-ASCII)
Thanks to the magic of Stack Overflow, I've discovered you can put this at the top of your ruby file and things will work again!
#!/usr/bin/env ruby
# encoding: utf-8
I am so impressed with this simple widget! Hats off to Alex Gorbatchev for making the Internet more awesome.
It's super well documented, and has a beautiful site with great demos and examples:
Expect to see it on a Posthaven post editor near you. I also donated today.
One of my least favorite chores was always migrating schema. Luckily Rails 3 has a new semantic called 'store' -- as detailed here.
Store gives you a thin wrapper around serialize for the purpose of storing hashes in a single column. It's like a simple key/value store backed into your record when you don't care about being able to query that store outside the context of a single record.
You can then declare accessors to this store that are then accessible just like any other attribute of the model. This is very helpful for easily exposing store keys to a form or elsewhere that’s already built around just accessing attributes on the model.
Make sure that you declare the database column used for the serialized store as a text, so there’s plenty of room.
Examples:
class User < ActiveRecord::Base
  store :settings, accessors: [ :color, :homepage ]
end

u = User.new(color: 'black', homepage: '37signals.com')
u.color                            # Accessor stored attribute
u.settings[:country] = 'Denmark'   # Any attribute, even if not specified with an accessor

# Add additional accessors to an existing store through store_accessor
class SuperUser < User
  store_accessor :settings, :privileges, :servants
end
I love this because now I can just store extra info at any time just by altering my model code. No more waiting for the next schema migration and all the pain of that on a production server. Just drop it into a serialized hash that's persisted into a TEXT blob on your row, and you're all good. NICE.
Of course that doesn't solve the problem of needing to index something specific -- but you can always just add that in your next major schema revision anyway, and then fill in that column on an offline basis later.
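Under the hood there's no magic: store is essentially serialize plus generated accessors. Here's a stripped-down, plain-Ruby sketch of the mechanics (the class and method bodies are mine, not Rails internals), which runs without ActiveRecord:

```ruby
require 'yaml'

# Sketch (my own, not Rails internals) of what `store` does: keep a hash in
# memory, serialize it to text for the database column, and define accessors
# that read and write hash keys.
class StoreSketch
  def self.store_accessor(*keys)
    keys.each do |key|
      define_method(key)       { settings[key.to_s] }
      define_method("#{key}=") { |value| settings[key.to_s] = value }
    end
  end

  store_accessor :color, :homepage

  def settings
    @settings ||= {}
  end

  # What would be written to the TEXT column
  def serialized_settings
    YAML.dump(settings)
  end

  # What would happen when the row is read back
  def self.load(text)
    instance = new
    instance.instance_variable_set(:@settings, YAML.load(text))
    instance
  end
end

u = StoreSketch.new
u.color    = 'black'
u.homepage = '37signals.com'
row = u.serialized_settings      # a text blob; no schema change needed
restored = StoreSketch.load(row)
restored.color  # => "black"
```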
Back in 2010, Adam D'angelo speculated that NoSQL was mainly a fad that would end with a relational database with more relaxed semantics. Well, this is at least 50% of the way there, at the app level.
Hell, this is actually how FriendFeed would store data in MySQL back in the day -- JSON blobs, and they'd just create new tables when they needed new indices. No schema changes EVER with that approach.
Little things like this make it difficult to impossible for me to consider using other web platforms than Rails -- other than years of my tooling about in it (and learning all kinds of lessons the hard way), things just keep getting better and better.
Things are much much much improved from before. Of particular note is how much cleaner the OAuth2 APIs are.
OmniAuth recently reached version 1.0 and has a number of nice additions. In this episode we’ll take a look at a new strategy called OmniAuth Identity which allows users to create an account by supplying a user name and password instead of logging in through an external provider.
To make a long story short, Snow Leopard ships with SSH agent key forwarding disabled by default, and you will have to modify the file /etc/ssh_config to get it working.
Just change the lines

# Host *
# ForwardAgent no

into

Host *
ForwardAgent yes

and you are good.
My home directory is on my secondary volume, so it's actually at /Volumes/Secondary HD/Users/me -- unfortunately, when you run "bash < <(curl -s https://rvm.beginrescueend.com/install/rvm)", the RVM install script will output this annoying error: "usage: dirname path". It's maddening. I checked the chat logs, and Google didn't come up with a fix.
Luckily I busted out my bash-fu to debug the RVM install script. Pro tip -- run bash -x scriptname.sh and it will give you the line number things fail on. I saw that it was failing on line 318. A little digging and I noticed that since my directory name has a space, it was borking one of the commands.
So you came here for a fix, right? Here's the simple fix.
According to RVM docs though, you can install as root (run sudo bash to get a root shell) and it will install to /usr/local -- so even though that's not the recommended way to do it for dev users, this seems to be the best way to get it to work if you have a space in your home directory. (As of 2013 this is how you do a "multi user" install, which works just fine.)
Google’s Font Directory and API for web fonts could have a transformative effect on how we read the web. The only problem is, Google has made it very difficult to download all of the actual font files.
Web designers must be free to experiment with these new fonts, to sketch, comp and get to know these typefaces in browser and non-browser applications. This is why I’m providing this archive.
You can use it in web pages, but until I found this, I couldn't design with it. Now I can. EPIC WIN.
I found the packaging still vaguely annoying since each ttf file was in a separate directory, so I ran this on the command line (cd to the untarballed directory first):
mkdir all; find . -iname '*.ttf' -exec cp \{\} ./all \;
Then I had all the ttf files in one directory at the root (./all) and I could drag and drop that into OS X Font Book.
This is the minimal list of links that I think you need to get up and running with node, express, and a fully functioning node site.
Node.js homepage
This is the beginning. Node.js actually runs your javascript file. It's the interpreter runtime.
Express.js
Express gives you Sinatra-like routing. It gives you the baseline to be able to run a functioning web app.
Mongoose ORM
Mongoose gives you pretty syntax for accessing MongoDB models. It can be your data layer.
Jade
The default HAML-like template format for Express.
NPM
It's a package manager, like gem or CPAN, for the Node universe -- install, compile, and update all from one command line.
NPM module listing
Listings of useful NPM projects; super useful.
Tutorials
How to Node.org
Great up-to-date blog, thanks for the suggestion @DShankar!
Nodepad: An in-depth tutorial
How to write a full app, in 15 parts...
For Rubyists
Node JS for my tiny ruby brain
via technoweenie of github
Advanced Topics
socket.io - easy realtime long polling library
coffeescript - beautiful syntax for javascript -- you can write an entire node project in coffeescript. Worth +10,000 elegant hacker points.
Node Inspector - debugging for your server
nvm - node version manager, kind of like rvm but for node -- cleanly lets you switch between node.js interpreter versions
One of the first systems our engineers built in AWS is called the Chaos Monkey. The Chaos Monkey’s job is to randomly kill instances and services within our architecture. If we aren’t constantly testing our ability to succeed despite failure, then it isn’t likely to work when it matters most – in the event of an unexpected outage.
On the face of it, it seems insane. Why would you intentionally kill parts of your site? Yet in practice, being able to handle failure consistently means you are never ever surprised.
Edit: My friend Vinny Magno writes:
Automobile assembly lines started doing the same thing (I think Ford was the first, though I might be wrong).
A single line can produce multiple models back to back. So for example, a Focus may be followed by an Explorer and then an Escape. At one point, the chassis and body are aligned and welded together. Obviously welding a Focus body to an Explorer chassis would be a problem, but the automation is so good that the defect rate was incredibly low. Since such a defect happened so infrequently, operators became complacent and failed to catch it the times that it did occur. To solve the problem, Ford purposefully upped the error rate to keep the operators sharp.
Turns out there's precedent!
I implemented my new scheme and running time went from 9 minutes to 24 SECONDS. I liked this approach so much I decided to generalize it as ActiveRecord::Base.transform. Sample usage:
# if users don't have names, give them a random one
NAMES = ['Adam', 'Ethan', 'Patrick']
User.transform(:name, :conditions => 'name is null').each do |i|
  i.name = NAMES[rand * NAMES.length]
end
Really interesting use of temp tables here.
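The collect-then-bulk-apply shape of the idea can be sketched in plain Ruby. This is a hypothetical reconstruction (the class and method names are mine, not the original implementation, and the rows are hashes standing in for database records); with a real database, the flush step is where the temp table plus a single bulk UPDATE would replace N individual UPDATE statements.

```ruby
# Hypothetical sketch of the idea behind transform: yield every matching
# record, note which ones changed, and apply all changes in one flush at the
# end instead of issuing one UPDATE per record.
class BulkTransformer
  attr_reader :flushed

  def initialize(rows)
    @rows = rows   # hashes standing in for DB rows
  end

  def transform(column)
    changes = {}
    @rows.each do |row|
      before = row[column]
      yield row
      changes[row[:id]] = row[column] if row[column] != before
    end
    flush(column, changes)
    changes.size
  end

  private

  # One bulk write instead of one write per record; with a real database this
  # is where inserting `changes` into a temp table and joining would go.
  def flush(column, changes)
    @flushed = changes
  end
end

NAMES = ['Adam', 'Ethan', 'Patrick']
rows = [{ id: 1, name: nil }, { id: 2, name: 'Zed' }, { id: 3, name: nil }]

changed = BulkTransformer.new(rows).transform(:name) do |row|
  row[:name] ||= NAMES[rand(NAMES.length)]
end
changed  # => 2, only the two rows missing names were touched
```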
These links are clickable and should work in all browsers; pasting or typing them requires full IDN support.

http://مثال.إختبار (Arabic script, Arabic)
http://例子.测试 (Simplified Chinese script, Chinese)
http://例子.測試 (Traditional Chinese script, Chinese)
http://παράδειγμα.δοκιμή (Greek script, Greek)
http://उदाहरण.परीक्षा (Devanagari script, Hindi)
http://例え.テスト (Kanji, Hiragana, Katakana scripts, Japanese)
http://실례.테스트 (Hangul script, Korean)
http://مثال.آزمایشی (Perso-Arabic script, Persian)
http://пример.испытание (Cyrillic script, Russian)
http://உதாரணம்.பரிட்சை (Tamil script, Tamil)
http://בײַשפּיל.טעסט (Hebrew script, Yiddish)
I had no idea. Posterous support for these bad boys coming soon.
While America threw on its eating pants and combed the Thursday circulars for deals, John Gruber spent Thanksgiving preparing to unveil his regular expression for finding URLs in arbitrary text.
\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))

Pretty dense. Let’s be that guy and break it out, /x style (ignoring white space, with comments):

\b                 # start with a word boundary
(
  (
    [\w-]+://?     # protocol://, second slash optional
    |              # OR
    www[.]         # a literal www.
  )
  [^\s()<>]+       # non-whitespace, parens or angle brackets,
                   # repeated any number of times
  (?:              # (end game)
    \([\w\d]+\)    # handles weird parentheses in URLs (http://example.com/blah_blah_(wikipedia))
                   # won't handle this twice: foo_(bar)_(and)_again
    |              # OR
    (
      [^[:punct:]\s]  # NOT a single punctuation or white space
      |               # OR
      /               # trailing slash
    )
  )
)
Amazing writeup by Alan Storm on Gruber's autolinking regex.
Update: Gruber updated this with an even more awesome regex, with breakout explanation.
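The pattern is easy to sanity-check in Ruby, whose regex engine supports the POSIX [[:punct:]] class used here. Note I've restored the angle brackets in the character class that HTML-escaping tends to eat from the posted version; the sample text is my own, not Gruber's.

```ruby
# Gruber's URL pattern, with the <> characters in the negated class restored
# (HTML-escaping often strips them from the posted version).
pattern = %r{\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))}

text = "See http://example.com/blah_(wikipedia) and www.daringfireball.net."
urls = text.scan(pattern).map(&:first)
urls  # => ["http://example.com/blah_(wikipedia)", "www.daringfireball.net"]
```

Note the end game doing its job: the parenthesized Wikipedia-style URL is kept whole, while the trailing period after the bare www link is dropped.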
If you're a Twitter user, logged in, and have JavaScript, you'll be able to see my profile here:
However, Googlebot will recognize that as a URL in the new format, and will instead request this URL:
Sensibly, Twitter wants to maintain backward compatibility (and not have their indexed URLs look like junk), so they 301 redirect that URL to:
(And if you're a logged-in Twitter user, that last URL will actually redirect you back to the first one.)
seomoz.org, dependable as usual.
The dreaded problem that most web developers come across is the z-index issue with Flash elements. When the wmode param is not set, or is set to window, Flash elements will always be on top of your DOM content. No matter what kind of z-index voodoo you attempt, your content will never break through the Flash. This is because Flash, when in window mode, is actually rendered on a layer above all web content.

There is a lot of chatter about this issue, and the simple solution is to specify the wmode parameter as opaque or transparent. This works when you control and deliver the Flash content yourself. However, this is not the case for Flash ads....

So, to solve all this, I wrote some JavaScript that will dynamically add the correct wmode parameter. I call it FlashHeed. You can get it now on the GitHub repo.

It works reliably in all major browsers, and has no dependencies, so feel free to drop it into your Prototype or jQuery dependent website.
Facebook gave a MySQL Tech Talk where they talked about many things MySQL, but one of the more subtle and interesting points was their focus on controlling the variance of request response times and not just worrying about maximizing queries per second.
But first the scalability porn. Facebook's OLTP performance numbers were as usual, quite dramatic:
- Query response times: 4ms reads, 5ms writes.
- Rows read per second: 450M peak
- Network bytes per second: 38GB peak
- Queries per second: 13M peak
- Rows changed per second: 3.5M peak
- InnoDB disk ops per second: 5.2M peak
This looks to be a must-see tech talk for anyone on the scaling / infrastructure side of the web.
People have chimed in and talked about the Foursquare outage. The nice part about these discussions is that they’re focusing on the technical problems with Foursquare’s current setup. They’re picking it apart and looking at what is right, what went wrong, and what needs to be done differently in MongoDB to prevent problems like this in the future.
Let’s play a “what if” game. What if Foursquare wasn’t using MongoDB? What if they were using something else?
Riak? Cassandra? HBase? MySQL?
The world has been made a better place by this fine JavaScript library. Timezones have always been a bit of a tricky thing to handle.
The author says:
You can’t have a named time zone in javascript (example : eastern time or central time), you can only have a time zone offset which is represented by universal time (UTC) minus the distance in minutes to it by calling dateVariable.getTimezoneOffset(). It means that if the time zone offset is -1 hours of UTC, javascript will give you 60. Why is it inverted in javascript? I have no idea.
In winter, it’s always standard time. In summer, it’s daylight saving time which is standard time minus 60 minutes… but not for every country. Plus, summer and winter are inverted in the southern hemisphere. That’s a lot of exceptions and that’s why I created the jsKata.timezone library.
Find it on github.
With TextMate plugins too! Nice. Sometimes you just want a simple way to validate some js. This will do nicely.
Just in case you ever have to do it again, here's a way to do it using MacPorts...
sudo port install mysql5 +server
sudo port install mysql5-server
sudo env ARCHFLAGS="-arch x86_64" gem install mysql --no-rdoc --no-ri -- --with-mysql-config=/opt/local/bin/mysql-config5
# setup mysql.sock
sudo ln -s /opt/local/var/run/mysql5/mysqld.sock /tmp/mysql.sock
The compiler cannot know how you expect the program to behave, so you must "extend" the compiler by adding unit tests (regardless of the language you're using). If you do this, you can make sweeping changes (refactoring code or modifying design) in a rapid manner because you know that your suite of tests will back you up, and immediately fail if there's a problem — just like a compilation fails when there's a syntax problem.
But without a full set of unit tests (at the very least), you can't guarantee the correctness of a program. To claim that the strong, static type checking constraints in C++, Java, or C# will prevent you from writing broken programs is clearly an illusion (you know this from personal experience). In fact, what we need is
Strong testing, not strong typing.

So this, I assert, is an aspect of why Python works. C++ tests happen at compile time (with a few minor special cases). Some Java tests happen at compile time (syntax checking), and some happen at run time (array-bounds checking, for example). Most Python tests happen at runtime rather than at compile time, but they do happen, and that's the important thing (not when). And because I can get a Python program up and running in far less time than it takes you to write the equivalent C++/Java/C# program, I can start running the real tests sooner: unit tests, tests of my hypothesis, tests of alternate approaches, etc.
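The same idea in Ruby terms, with a toy example of my own: a type checker can verify that a method takes and returns numbers, but only a test can verify it computes the right number.

```ruby
# The signature of `discount` could be statically checked in many languages,
# but only tests catch a wrong formula.
def discount(price, percent)
  price - (price * percent / 100.0)
end

# Minimal hand-rolled assertion, standing in for a real test framework
def assert_equal(expected, actual)
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

assert_equal 90.0,  discount(100, 10)   # the "unit test" that typing can't replace
assert_equal 100.0, discount(100, 0)
assert_equal 0.0,   discount(100, 100)
```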
av = ActionView::Base.new(Rails::Configuration.new.view_path)
av.assigns[:foo] = "bar"
av.render "some/template/here"
Yes I know this is evil. =D
Some really practical tips here. Adam Singer is leading up this effort for using Varnish at Posterous. Definitely some cool infrastructure at work.
interrupted = false
trap("INT") { interrupted = true }

loop do
  # Do a lot of incredible things … ;-)
  if interrupted
    exit
  end
end
This is pretty handy for ad hoc benchmarking of pings, network issues, and other things.
Sit in a loop and just wait till I feel like I've got enough data....
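A sketch of the pattern applied to that ad hoc measurement case. The timed work and the sample cap here are stand-ins of mine: swap in a ping or an HTTP call for the sleep, and drop the cap if you really do want to run until Ctrl-C.

```ruby
# Collect timing samples in a loop; stop cleanly on Ctrl-C or at a safety cap.
samples = []
interrupted = false
trap("INT") { interrupted = true }

until interrupted
  start = Time.now
  sleep 0.001                  # stand-in for the operation being measured
  samples << (Time.now - start)
  break if samples.size >= 5   # safety cap so the sketch terminates on its own
end

avg_ms = samples.sum / samples.size * 1000
puts format("%d samples, avg %.3fms", samples.size, avg_ms)
```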