Listening to Numbers

October 31, 2012 at 11:19 pm (art, computers, interesting, internet, music)

I get Make Magazine and from time to time find something that piques my (software-related) curiosity.  This time it was an article about making synthesized music from data using the algorithms from Dr Jonathan Middleton’s Music Algorithms website – http://musicalgorithms.ewu.edu/

Basically it takes a sequence of numbers, scales it to a pitch range you select, gives you options for transforming the pitches – e.g. scaling backwards, replacing specific notes with another note, using division or modulo arithmetic, etc. – and then gives you options for applying a duration to each note: either a fixed duration or one derived from a scaling formula.
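The site doesn’t publish its code, but as a rough idea of what the scaling and modulo options are doing, here’s a minimal sketch in Python – my own reconstruction, not anything taken from the Music Algorithms site:

```python
# Minimal sketch of mapping an arbitrary number sequence onto a pitch range.
# This is my own reconstruction of the idea, not the site's actual algorithm.
def scale_to_pitch_range(values, low=40, high=52):
    """Linearly map a number sequence onto pitches in the range low..high."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1                        # avoid divide-by-zero on flat data
    return [round(low + (v - lo) * (high - low) / span) for v in values]

# e.g. the Fibonacci preset mapped onto an octave starting at middle C (key 40)
fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
print(scale_to_pitch_range(fib))                 # every pitch lands in 40..52

# The modulo option is even simpler - wrap each value into the range instead:
print([40 + (v % 13) for v in fib])
```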

Finally you have the option to play it, download it as a MIDI file or see it in a crude representation of notation.

There are a number of ‘preset’ options to get you going – I experimented listening to pi, the Fibonacci sequence and their ‘chaos algorithm’ using ranges of 0 to 88 (a full piano range) and 40 to 52 (basically an octave starting from middle C).  I tended to use a fixed duration of 0 or 1 as it went by suitably quickly and kept things interesting.

Then I thought I’d try something a little different.  Using the option to ‘import your own sequence’ I took a wander over to Google Trends.  This plots the frequency of people searching for specific terms over time.  If you log in with your Google account you can download the results as a CSV, and then it’s trivial to open it in a spreadsheet, select the column of results, paste it into the Music Algorithms form and listen to what something sounds like.
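If you’d rather skip the spreadsheet step, something like the following does the same job – the file name and the assumption that the values sit in the second column are mine, so check the layout of your own download:

```python
# Rough sketch: pull the numbers out of a Google Trends CSV export and join
# them up ready to paste into the 'import your own sequence' box.
# The file name and the values-in-the-second-column layout are assumptions.
import csv

values = []
with open("trends.csv", newline="") as f:
    for row in csv.reader(f):
        if len(row) >= 2 and row[1].strip().isdigit():   # skip header/label rows
            values.append(row[1].strip())

print(", ".join(values))   # comma-separated, ready for the form
```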

For my own entertainment, I had a listen to the following:

  • Default ‘swine flu’ search that Google Trends offers.  This works well scaled 0 to 88, as the pitch then mirrors the graph quite well.  I didn’t paste in all the zeros, just the portion with the shape and got a nice quickly peaking and decaying piece.
  • Facebook is a good one … it goes from continuous low through a slowly rising scale, increasing in pitch and frequency of change as time moves on, finally tinkling along in the high register as search frequency fluctuates.  This would be a really interesting one to do with number of users, scaling from Mark Zuckerberg as #1 up to user 1 billion …
  • Considering the date, Halloween was an interesting one – you get a random-sounding, very quickly rising and falling scale and then silence … the ratio of silence to scale is around 1 in 12, funnily enough, and the pattern repeats 8 times (for 2004 to the present day) … this works well with a duration of 0 across the full piano range – nice and quick.
  • The text ‘music algorithms’ generated a curious pattern – reasonably random around a specific value, but that value has slowly decayed over time.
  • Then I tried a whole range of whatever came into my head looking for an interesting graph – seeing fluctuating searches, lots of rising trends – then finally settled on Tim Berners-Lee.  Not sure why!  But that gives a nice, angry sounding (especially on duration zero) left-hand piano line for the majority of the data set, generally getting slightly lower, adding to the angry nature, until there is a quick high flourish representing him appearing in the Olympics opening ceremony!

I only played the MIDI files back using the standard instrument, i.e. a basic piano sound. It would be really interesting to actually use some of these data sets to define a synthesized timbre too.  Could be the start of a very interesting musical piece.
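As a quick illustration of the timbre idea – and this is just my own back-of-an-envelope sketch, nothing to do with the Music Algorithms site – you could treat a slice of the data as the harmonic spectrum of a single tone:

```python
# Back-of-an-envelope sketch: use a slice of a data series as the harmonic
# spectrum of a single tone (simple additive synthesis) and write it to a WAV.
# The data values and file name here are made up for illustration.
import math, struct, wave

data = [5, 12, 30, 55, 80, 40, 22, 9]      # e.g. a handful of Google Trends values
total = sum(data)
amps = [d / total for d in data]           # normalised harmonic amplitudes

RATE, FREQ, SECONDS = 44100, 220.0, 2      # sample rate, fundamental (A3), length

samples = []
for n in range(RATE * SECONDS):
    t = n / RATE
    s = sum(a * math.sin(2 * math.pi * FREQ * (k + 1) * t)   # harmonic k+1
            for k, a in enumerate(amps))
    samples.append(int(32767 * 0.8 * s))

with wave.open("timbre.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))
```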

What would be really interesting is to hook it up live to some Google or other Internet stats and then allow you to hear what is going on, say, on Twitter.  A bit like a musical version of The Listening Post.  Maybe that could be a job for my Raspberry Pi …

Kevin.

 


Why was Usenet replaced by Web Forums?

October 9, 2012 at 8:47 pm (computers, internet)

I’ve never really liked web forums – the idea of having to go back to visit a web site to see what is new, etc. always seemed a backwards step to me compared to what came before.  I like the information to come to me, not me having to remember to go and check.  RSS came in and made it a bit more tolerable – at least I could subscribe to information again – but you still have to find the appropriate feed in the first place and add it.  I used to like browsing newsgroups on Usenet – ok, it usually required a different reader, but it just, well, worked.

And for some reason I’ve only just realised why I really preferred Usenet to web forums.  And that is that all the newsgroups were in a single hierarchy.  I’ve had cause recently to start looking up a few technical issues and do the usual ‘resort to Google’ to sort them out.  But the kinds of things I’ve been looking at have involved a range of different forums.  There is avforums for TV and video issues, forums for Raspberry Pi use, forums for virtual worlds, forums for my cheap Android tablet, and so on.

And now if you want to create a space for discussion there are so many different options for ‘one offs’ too – Google and Yahoo! both do groups, Facebook, your own web forum hosting, etc, etc.

But how is one really expected to know about all this?  Back in Usenet days, you started at the top and worked down the hierarchy – comp.something for computer-related things, alt.something, news.something, etc.  In fact, it’s been so long since I thought about Usenet, I can’t even remember which newsgroups I used to follow or what the full hierarchy actually looks like.  For a fascinating diversion, read about the Great Renaming – back in a time, i.e. 1987, when it was still possible for a community to contemplate a major change to a worldwide facility used by millions.
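For the curious, you can still poke around the hierarchy programmatically.  Here’s a quick sketch using Python’s nntplib module – the server name is a placeholder, so substitute whatever NNTP server your ISP provides:

```python
# Quick sketch of working down the Usenet hierarchy with Python's nntplib.
# The server name below is a placeholder - use your own ISP's NNTP server.
from nntplib import NNTP

with NNTP("news.example-isp.net") as server:
    # List the groups under the comp.* branch of the hierarchy
    resp, groups = server.list("comp.*")
    for g in sorted(groups, key=lambda g: g.group)[:20]:
        print(g.group)

    # Drill into a single group to see how many articles it holds
    resp, count, first, last, name = server.group("comp.lang.python")
    print(name, "has", count, "articles")
```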

My ISP does still provide Usenet access, but again I haven’t used it for years.

So we’ve ended up with a whole range of walled gardens – different silos of information buried in forums, in some cases with a single topic spread over many different forums – and so we resort to Google to find anything.  And we have to join a whole series of different forums before we can ask a question.

I really don’t quite see why web forums supplanted newsgroups.  Well, I guess I do – people got fed up with the flames and unmoderated spaces.  Usenet tended to be very Western in use – non-English languages weren’t easy.  Almost everything else has moved to the Universal Firewall Traversal Protocol (i.e. port 80).  And of course, Usenet didn’t suffer new users very well.  It tended to consist of very well-established communities, and I suppose the strong influx of a non-techie audience found that joining newer web forums satisfied their needs in a much friendlier way (with less intolerance and trolling).  Either way, the decline has been given a name – the Eternal September:

“Eternal September (also September that never ended) is the period beginning September 1993, a date from which it is believed by some that an endless influx of new users (newbies) has degraded standards of discourse and behavior on Usenet and the wider Internet.”

Many mailing lists have kind of gone the same way, being replaced with Facebook groups and web forums.  So now everything is distributed across the entire Internet.

Another view on the decline of Usenet:

“Segan said that the “eye candy” on the World Wide Web and the marketing funds spent by owners of websites convinced Internet users to use profit-making websites instead of Usenet servers. In addition, DejaNews and Google Groups made conversations searchable, and Segan said that this removed the obscurity of previously obscure Internet groups on Usenet.”

So – blame the marketing departments?  Probably, actually – most likely it comes down to driving traffic to forums to earn revenue from the online ads they carry, I guess.

Still, there is something about Usenet I miss.  But I’m not sure why I don’t subscribe to any newsgroups any more – I still could …

Kevin.

 


Dark, Unexposed Corners of the Internet

October 3, 2012 at 9:50 pm (computers, interesting, internet)

I’m almost finished reading through ‘The Geek Atlas’, which will probably be the subject of a post of its own at some point, but for various reasons I was led to the author’s website and blog.  There were two very interesting (to my geeky side) recent posts that I found fascinating to follow up.

The first is a recording of a recent keynote speech given by the author on the issue of ‘big data’.  This is a bit of an IT buzzword for some reason this year (a bit like ‘cloud’ last year), but the keynote is all about the fact that you can basically pick any point in time, and big data will always mean ‘more data than I can handle with the machinery I currently have at my disposal’.  It describes the issues faced by some engineers tasked with calculating the distances between stations in the British Rail network – they had 9 months to come up with an answer – and this was in 1955.  It is a fascinating talk – I recommend it.

The other one that caught my eye relates to the recent announcement that the body that oversees the allocation of Internet addresses for Europe is down to its last few (few in this case being approx. 16 million) and we are rapidly running out.  He noticed that there are various bits of UK government that appear to be sitting on major chunks of unused address space.

Now, working in IT, I know what a major pain and effort it would be to free up any of these already-allocated addresses, so I wasn’t really expecting the government to suddenly experience a £500m–£1.5bn windfall from this.  I also know that the first major call to be a ‘good Internet citizen’ and return unused addresses was actually made in 1996 (in the shape of RFC 1917):

“This document is an appeal to the Internet community to return unused address space, i.e. any block of consecutive IP prefixes, to the Internet Assigned Numbers Authority (IANA) or any of the delegated registries, for reapportionment.”

So, over 15 years later, any easy returns would probably have happened by now.  However, what has been interesting in this recent case is seeing geeky, interested members of the public using Freedom of Information requests as a means to prod said government departments to find out what these blocks are used for.

First – the UK MoD has 25.0.0.0/8 – the response:

“I can confirm that the IPv4 address block about which you enquire is assigned to and owned by the MOD; however, I should point out that within this block, none of the addresses or address ranges are in use on the public internet for departmental IT, communications or other functions.  To date, we estimate that around 60% of the IPv4 address block has been allocated for internal use.  As I am sure you will appreciate, the volume and complexity of the Information Systems used by the Armed Forces supporting military operations and for training continues to develop and grow.  We are aware that the allocation of IPv4 addresses are becoming exhausted, and the issue has been recognised within the Department as a potential future IS risk.”

Then the UK DWP – 51.0.0.0/8 – the response:

“DWP have no plans to release any of the address space for use on the public Internet. The cost and complexity of re-addressing the existing government estate is too high to make this a viable proposition. DWP are aware that the worldwide IPv4 address space is almost exhausted, but knows that in the short to medium term there are mechanisms available to ISPs that will allow continued expansion of the Internet, and believes that in the long term a transition to IPv6 will resolve address exhaustion. Note that even if DWP were able to release their address space, this would only delay IPv4 address exhaustion by a number of months.”

So no – too expensive to release them, and as stated above, it would only prolong the agony very slightly anyway.  However, I do wonder how many other corners of the Internet are ‘dark’ like this and will never actually be connected.

Maybe we will do better with IPv6 allocations – even a home user will get an allocation that is larger than the whole of the current Internet – but the authors of RFC 3177 make the argument that this is fully justified (especially as they have room for around 35 trillion such allocations):

“… based on experience with IPv4 and several other address spaces, and on extremely ambitious scaling goals for the Internet amounting to an 80 bit address space *per person*.  Even so, being acutely aware of the history of under-estimating demand, the IETF has reserved more than 85% of the address space (i.e., the bulk of the space not under the 001 Global Unicast Address prefix).  Therefore, if the analysis does one day turn out to be wrong, our successors will still have the option of imposing much more restrictive allocation policies on the remaining 85%.”

So there is quite a large margin for error, even compared to the decision back in the 1970s to allow for 4 billion addresses for the current Internet, at a time when there were only a handful of computers to be connected.
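As a quick sanity check of those numbers – my own arithmetic, not figures taken from RFC 3177 itself:

```python
# Back-of-the-envelope check of the IPv6 allocation numbers quoted above.
ipv4_total = 2 ** 32            # the "4 billion" decision of the 1970s
home_alloc = 2 ** (128 - 48)    # a /48 per site, as RFC 3177 recommended

# One home allocation versus the entire IPv4 Internet:
print(home_alloc // ipv4_total)           # 2**48, roughly 280 trillion times bigger

# The 001 Global Unicast prefix is 1/8th of the IPv6 space; carved into /48s:
print(f"{2 ** (48 - 3):,}")               # ~35 trillion separate /48 allocations
```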

As I say, an interesting bit of interplay on a topical, if geeky, infrastructure issue.

BTW – you can see both blocks in the top left hand quarter of the xkcd Internet map (labelled 25 UK MoD and 51 UK Social Security).

Kevin.
