Archive: January 2023



Happy New Year!

Sunday,  01/01/23  10:41 AM

Happy New Year's!  Woo hoo, we made it out of 2022 and on into 2023!

It rained super hard last night, and just before midnight I captured this photo of the magnificent old oak tree in our backyard.  It suddenly struck me - this tree is not only older than the whole neighborhood surrounding it, not only older than me, but older than anyone alive, much much older; in fact older than the United States, older than the European settlement of California.  Older than almost all of what we consider "history".  And yet there it stands, in the rain, hanging out, waiting patiently for the next trip around the sun, and the next, and the next.  Supercool.

So ... what did you do last night?  We had a nice dinner - earlyish - then came home and hung out 'till midnight, not too exciting, but nice.  It was great that New Year's Eve in Times Square featured a band I actually knew and liked (Duran Duran).

Today there was a lot of football to watch - I used to be a bigger fan, now I'm mostly just a College Bowls and Pro Playoffs watcher.  Tomorrow, no bowls (yeah, it's Sunday; the tradition of moving the New Year's Day bowls to Monday when New Year's falls on a Sunday began for religious reasons, but now it's so that college and pro games don't overlap).  And then Monday, the Rose Parade and all the New Year's stuff.  Lots of watching and eating coming up.

Also great - Shirley made Oliebollen, a kind of Dutch donut traditionally made on New Year's Eve - yes they are as good as they look, and no calories of course.

And so we move on into 2023 ... what will it bring?  Stay tuned!

 

 

 

checking in after 20 years

Monday,  01/02/23  02:29 PM


keeping track ... in the archive ... check it out 
Checking in ... welcome to blogging in 2023. 

I began blogging on Jan 1, 2003, so it's now been 20 years!  Wow.  Not continuously - as a quick peek at the archive will reveal, there have been gaps - but definitely more on than off.  During that time I've made 3,406 posts containing 10,771 images.  There are 24,965 links, of which only 912 are back to other content on this blog.  (It would be interesting to determine how many of those links have died ... should be possible to do ... made slightly harder by the fact that some sites don't return a 404 when content is missing.)

At various times I've paid more or less attention to traffic; for some time now I have used Google Analytics, which is pretty good; it tells me I currently get about 500 unique visitors a day, who conduct 600 sessions, which average 1:32.  So be it.  This doesn't measure RSS and I actually think, based on the survey I just ran, that I get way more inbound from there than from search engines.  I also get linked back from Twitter, where I post a link back to every new item; no stats on that, but I should dig deeper.

Hitting a new year means I have to roll over all those "this date in" links at the top of the sidebar - who knew I would someday have 20 of them! - and revisiting old stuff remains one of the most fun things about blogging.  My flight feature gets a lot of use, at least from me (show me what I posted this day every year).

It's a little fun to remember all the stuff I used to do: frames (yikes), blogrolls (we hardly knew ya - but OPML lives on), blog roulette (pick two blogs from the blogroll and feature them), and lots of under-the-covers performance optimization from when I hosted this blog myself (it lives at AWS now).  Just recently I got rid of another complication - I used to serve a skinnied down version of each page to robots (now they get what everyone gets).


And so onward in 2023 ... let's see what happens. 
(Anyone care to guess whether I'll be blogging in 2043?  Stay tuned!)

 

first pass

Monday,  01/02/23  03:01 PM

It's the first filter pass of the new year ... lots going on.  Many (many!) year-in-review articles out there, and many (many!) what's-going-to-happen-next articles and posts, too.  The big trends I see are 1) crypto is over, the pyramid is finally collapsing, and 2) tech investment has retrenched, we seem back to a more normal situation where good companies can raise money but bad companies (and non-companies) cannot.

Mashable: The deep sea discoveries and sightings of 2022.  "You're always finding things you haven't seen before."

The Federalist: 10 wins in 2022.  Many more losses, though... 

Nonzero News: Tweet of the year.  Sadly, a good call. 

Ars Technica: 10 best cars we drove in 2022.  Eight of the ten are EVs, zero of them are Teslas, #1 was a Kia.  Who would have thought? 


A recurring theme in 2022 has been the governments' reactions to Covid.  David Sacks retweets Elon Musk regarding Anthony Fauci.  To me it seems likely that the Wuhan flu virus was synthetic, and accidentally escaped the lab where it was created for "gain of function" research.  Sounds like a movie, and it stopped the whole world for a while. 

Here's an interesting Tweet from Joscha Bach.  I'm a huge fan of Wikipedia and never would have known this.  Did you? 

Taxprof Blog: The Myth of American Income Inequality.  "Real income of the bottom quintile, the authors write, grew more than 681% from 1967 to 2017. The percentage of people living in poverty fell from 32% in 1947 to 15% in 1967 to only 1.1% in 2017."  Economic growth has been a rising tide; all boats aren't lifted equally, but all boats *are* lifted.  Definitely conflicts with the prevailing narrative, huh? 


2022 was the year of many things; among them, SpaceX launches became commonplace, as did their successful landings and subsequent reuse of boosters.  Teslarati note their 61st launch of 2022 tied a 42-year-old record set by the USSR. 

Ottmar Liebert: 2022.  I link because he links the year in cheer, 183 ways the world got better, and New Atlas' best photos of 2022. 

Panda's Thumb: Breakthroughs of 2022. "I consider a miracle to be something that I understand in some detail and know it is impossible, yet there it is in my hand for $200 – a digital camera, for example."  He links Science: Breakthrough of the year ... the James Webb telescope: 


BusinessWeek: How not to play the game.  Yeah, the FTX collapse, etc., but there's a bigger picture, as this article points out.  "What makes this problem so hard in a crypto financial system is that there are no economic fundamentals."  I'm not sure there's an actual problem, there's just no there there. 

Liron Shapira: So long crypto, thanks for the memories.  This Tweet has been viewed 456K times.  You should view it too - click through and watch the video.  It's dead on. 

Miguel de Icaza: the truly amazing part about crypto is how they scaled scamming.  "340,000 people in this last round trusted their money to good looking web sites."

As a known "techie" my friends often ask me about crypto, and I've always said I don't get it.  By which I mean, I know how blockchains work, but I don't understand how ICOs and NFTs and DAOs create value.  And I still don't. 

Brad Feld: What just happened.  "If you're a fan of Harry Potter, think of 2022 as the sorting ceremony."  He thinks 2023 will be like 2000.  For crypto, it already is. 

Finally, from Marc Andreessen: The more things change... 

Definitely worth keeping in mind as we advance into the new year.  Some things like crypto are passing fads, others, like AI, are new things which genuinely create new value. 

And RealClearPolitics notes: Chase the American Dream in 2023.  "People across the United States share a positive and hopeful narrative about the American Dream. In 2022, most Americans said they either have achieved or are on their way to achieving the American Dream. Only a small minority, 18%, said it is out of reach. Notably, this trend was consistent across age groups, race, education, and income."

 

 

rain!

Saturday,  01/07/23  11:58 AM

Rain!  Yep, lots of rain.  Lots of storms.  Lots of weather.  Yikes...

Reading my flight of posts from years past, I guess yeah we do get winter in California and yeah it does often rain right after New Year's.  The pic at right is downtown LA, post storm.  Beautiful huh?  You often see those mountains and don't think of them, but you don't often see them covered with snow.

I was supposed to go sailing tomorrow - first race of the season, the Hot Rum - but Santa Barbara harbor is silted closed and the yacht club itself is surrounded by water.  It sits on stilts on the beach, and has done so for a long time, so I think the building will be okay, but the parking lot, beach, boatyard, etc. will need some recovery time.  Wow.

 

Zwifting

Saturday,  01/07/23  12:03 PM

So it's crummy weather and being a candy-ass Californian (instead of a hardy Dutchman!) I have resorted to Zwifting instead of riding outside.

The pic at left shows my setup; a Wahoo Kickr "smart trainer" that holds my bike, measures my power output, and simulates riding conditions like climbing hills and wind, a cadence sensor that goes in my sock, and a heart rate monitor.  I can use my Renovo road bike "as is" in this setup, easy in, easy out when I want to do a "real" ride outside.

Zwift has a bunch of riding terrain on their made-up island of Watopia, but also recently have digitized some real places to ride like France, Italy, and Innsbruck, Austria.  The overall experience isn't exactly like riding outside, but it's not bad.  Especially when exactly like outside would mean riding in a storm.  And the best part about Zwift is there are other people - just like me - out there riding, and we're all together.  Kind of fun to pass people from all over the world (well, and also getting passed by them :)

Zwift nerd note - if you look closely at this picture, you can see I'm wearing a KOM jersey.  Don't ask!

 

wet filter

Saturday,  01/07/23  12:15 PM

Making a filter pass on a rather soggy day ... we'll see how it goes.

I recently discovered Dutch Bros coffee!  How did I not know about them? 

SpaceX (and Elon Musk) celebrated their 61st launch of 2022, and their 61st successful one too.  Wow, just wow.  Remember when they were working on launch 1? 

Putting in a marker here ... over time, SpaceX might well become one of the largest and most successful companies of all time.

Not everyone is excited about space, Maciej writes why not Mars, in which he attempts to answer that question.  (via John Gruber)  And to be clear, his objection is to manned travel, which is admittedly not a bad point.  But if we abandon manned space travel, what's next for us?  (And how will I ever get to Titan?) 

An interesting reminder: our first image of Mars was a paint-by-numbers pastel drawing.  My father worked at JPL at the time of Mariner 4, I remember it well! 

I've been working on a cool project which involves using NVidia's CUDA library to accelerate processing by offloading parallel tasks to a GPU.  (More on this to come...)  GPUs are being used for this all over the place, especially in connection with AI tasks; computing a neural network is a perfect parallel operation.  (Essentially what our brains are doing all the time.)  But despite this, Slashdot reports desktop GPU sales hit 20-year low. 
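For the curious, the reason neural networks fit GPUs so well: a layer is just many independent dot products, one per output neuron, over the same input, and independent work parallelizes trivially - each output could be its own GPU thread.  A toy illustration (plain Python, no CUDA required; the numbers and names are just mine):

```python
# Toy illustration of why neural-net layers map so well onto GPUs:
# every output neuron is an independent dot product over the same input,
# so all of them can be computed in parallel (here sequentially, on CPU).

def dense_layer(inputs, weights, biases):
    """One fully connected layer: outputs[j] = sum_i inputs[i]*weights[j][i] + biases[j].
    Each iteration of the outer loop is independent - on a GPU, each
    would run as its own thread."""
    return [
        sum(x * w for x, w in zip(inputs, row)) + b
        for row, b in zip(weights, biases)
    ]

x = [1.0, 2.0]
W = [[0.5, 0.5],   # neuron 0
     [1.0, -1.0]]  # neuron 1
b = [0.0, 0.5]
print(dense_layer(x, W, b))  # [1.5, -0.5]
```

Scale that to thousands of neurons and millions of inputs and you see why a chip with thousands of cores beats one with eight.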

Related: Portland startup to mine artisanal bitcoin using only slide rules and graph paper.  Excellent! 

It's the blue people!  Avatar: The Way of Water surpasses $1B mark globally.  Not surprising to me - it was a good movie.  And a good reason to go to a theater, with the spectacle and 3D and everything.  This is not a movie where you say "oh I'll wait for it to be on Netflix". 

Speaking of Netflix, "Emily in Paris": why it's so hard to admit love for the show.  Salon's tagline: "at best is something of an embarrassment, at worst living embodiment of cultural imperialism."  Heh.  I'm not sure I love it, but I've liked it a lot.  It's entertaining, and that's all it has to be. 

 

Football

Sunday,  01/08/23  12:46 PM

This weekend I [re]discovered a latent interest in Football.  I honestly haven't watched much since the pandemic, missed the bowls and playoffs at the end of 2020, when they were moving around and being rescheduled and cancelled and everything, and didn't pay any attention last year, for ... reasons.  (And to be fair, spent January sailing in Florida - a story to be told here at some point - so I was otherwise occupied.)

Anyway, yeah, my DVR has tens if not hundreds of college bowls queued up, all the way through the big ones last weekend, and now pro games too, just in time for the playoffs.  It's a good combination with Zwifting; you can ride and watch at the same time, both activities only require fractional brainpower.

One big change I haven't blogged about here yet is that during the pandemic we moved - not far, still in Westlake Village, CA - and in the process have now fully embraced AT&T DirecTV streaming as our "cable", and shed our old Tivos.  So we now have a cloud-based DVR.  I always loved Tivo - was a super early adopter, as you know - but I have to admit this is nice.  Including being able to easily watch on my computer.

[grumpy sidenote: HEIC, WTF? - did we need a new image format?  No, we did not...  and why oh why did Apple decide to make it the default on their phones?]

 

crypto grump

Sunday,  01/08/23  01:02 PM

Hi, I'm Ole, and I'm a crypto grump.  I know how it works - better than most people - and I understand blockchains and cryptography and all the tech, and I appreciate the potential for having a digital currency used as a value exchange intermediary which isn't backed by physical assets like gold.

But.  The crypto fad of the last few years doesn't have anything to do with any of that.  It's a distributed pyramid scheme, and as an official crypto grump I do not see any value being created.  Tell me all about ICOs and NFTs and DAOs - go ahead.  Extol Web3 to me.  No. Value. Created.  Watch the video linked at right - it's excellent - and see if you don't agree.

So I'm watching the Rose Parade and in the middle of all the floats from big companies and cities and charities, there's a float from nouns.wtf*.  Maybe you've heard of them?  If not, let me tell you - it's a beautiful example of the whole scheme in action.

You might think a noun is a part of speech, but also, it's a 32x32 pixel caricature of a little person, formed from a random combination of features like head shape, glasses, body, etc.  At right is Noun #563, which was "minted" on Jan 1, 2023.  Every day a new Noun is born and auctioned off, and every day someone wins the auction and buys one.  This particular Noun was bought for 35.0 ethereum (ETH), which on Jan 1, 2023 was worth about $47K.  Huh. 

So what did that person get for their money?  The rights to the image, amazing and beautiful as it is?  Well no, Nouns are open and available to everyone.  You could use it yourself, right now, for $0.  All they got was an entry in the ethereum blockchain, which they could potentially sell to someone else.  So far about 570 Nouns have been created, and 126 transactions have taken place after the initial auction, so it would be hard to say that they are in demand.  But every day there's an auction for a new one, and people bid, and someone wins.  No. Value. Created.

What about that parade float?  Well if you auction off 570 Nouns at $47K or so each, you can pay for one.  Just now the Noun treasury contains 28,580 ETH, worth $38M.

My favorite part of this "project" is that there are ten "nounders", and every tenth Noun goes to them.  They are paying themselves each about $5K every ten days.  And they're proud of it ...  Such a deal!  It's good to be living on the tip of the pyramid.

There are hundreds of such scams operating in parallel.  Everyone is buying tokens for X today hoping they'll be worth more than X tomorrow.  And they will be for a while.  Just get out fast, because the whole thing is unsustainable.  No. Value. Created.

So, what about all the VCs who are investing in crypto?  Well VCs always live at the tip of the pyramid, this is perfect for them.  They are smart people, they are not confused.  They will get paid in tokens, they will sell their tokens, and they will get out early, before the stuff hits the fan.  When the music stops it will be ordinary retail investors who end up owning those entries in the blockchain and desperately trying to find buyers.

As a final note, for some reason some people choose to compare AI to crypto, maybe because they're related in time.  But don't be confused.  As Miguel de Icaza notes: AI is hyped because it is useful and shows a lot of promise.  Crypto was hyped because scammers and swindlers found they could make a killing.  They are not the same thing.

* BTW yeah, there is a "wtf" TLD, who knew?  Perfect for crypto projects!

 

 

still wet

Sunday,  01/08/23  02:01 PM

Wow, still wet out there.  All over.  With more rain in the forecast.  I love rain, but not all at once, please.

So I posted about being a crypto grump, but I'm also an AI cheerleader.  If you've played at all with DALL-E or ChatGPT or any of these sorts of tools, you can immediately see the incredible value being created.  (BTW, I use AI as shorthand for AI/ML, hope you're okay with that.)  As Steven Levy notes in Wired, Welcome to the Wet Hot AI Chatbot Summer.  (He's drawing a parallel to the so-called AI Winter...) 

BTW the worst part of using the amazing OpenAI tools is having to go through their login captcha.  Every time.  Argh.

Ah, you got me started.  Not only are captchas bad, but passwords are bad.  (OpenAI requires both.)  Fortunately the Internet is kind of settling in to a "text message confirmation = password" alternative.  But another problem remains - the "login or create new account" dilemma.  I've created accounts on hundreds if not thousands of websites and of course I can't remember.  Make it easy for me, please; if I enter an email and password and you don't know me, ask me if I want to create an account.  How hard is that? 
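The flow I'm wishing for is trivial to sketch - something like this server-side logic (hypothetical names, obviously; real code would hash passwords and rate-limit):

```python
# Sketch of the "just ask me" login flow - the behavior I wish sites
# implemented, not any real site's API.  Names are hypothetical.

def login_or_offer_signup(accounts, email, password):
    """If the email is known, try to log in; if not, offer to create
    an account instead of throwing an error."""
    user = accounts.get(email)
    if user is None:
        return ("offer_signup", email)    # "No account yet - create one?"
    if user["password"] == password:      # real code would hash & compare
        return ("logged_in", email)
    return ("bad_password", email)

accounts = {"ole@example.com": {"password": "hunter2"}}
print(login_or_offer_signup(accounts, "new@example.com", "x"))
# ('offer_signup', 'new@example.com')
```

One extra branch, and the "login or create new account" dilemma disappears.  How hard is that?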

Salon: An AI that can "write" is feeding delusions about how smart artificial intelligence really is.  We certainly have a chorus of professional writers disclaiming AI's ability to replace them.  I think we should ask ChatGPT to write an article, in the style of Salon, about how AI is not really intelligent.  Oh, wait...  [Later: someone did :)]

As you know I'm a close follower of the Iditarod, and recently I got an email from them asking "who's ready for Iditarod 2023?"  Great question.  And, not me, yet.  If you've hung around here in the past you know I traditionally post about the Iditarod, and have even created a little race tracker to help follow the action.  Which I did do in 2020 - it was just before the pandemic became "the pandemic" - and did do in 2021 - a weird year with the Iditarod being one of the few sporting events still held - but which I did not do last year - I was sailing in Florida.  But this year I'd like to do again ... time to get ready! 

Excellent!  Boom supersonic unveils new Symphony engine for faster-than-sound Overture airliner.  Having lived through the whole SST era, I can't believe we don't have these yet... 

Henry Kissinger: how to avoid another war.  "End the Ukraine War by trading NATO admission for Ukraine in exchange for Crimea and Donbas staying with Russia."  Ah yes, diplomacy.  [via David Sacks, my old PayPal colleague who has rapidly become one of my favorite Tweeters] 

Along with crypto and AI, another present-day technology gathering headlines is AR/VR.  Like AI it's been around for a bit, but unlike AI it hasn't found any killer apps.  Yet.  I think the form factor is the problem - people don't like wearing things on their head.  Once we have reliable implants to inject video and audio directly into our brains, this problem will disappear :) but for the moment, it's a big hurdle.  There are other problems too; Wired has a report from a metaverse skeptic: I Threw a Holiday Party in Horizon Worlds. It Didn't Go Well. 

Another new technology gains traction: My Dystopian Ride In San Francisco's Fully Driverless Cars.  As a happy Tesla Model S owner with Enhanced Autopilot, I feel this is now only a matter of time.  There's a big difference between the car driving for you - while you are sitting there - and having no driver at all. 

John Battelle: The Next First Day.  "I'm resolved to come back to this patch of land and dig around. I don't expect anyone to notice, and that's OK. I need to till the soil, clear the rocks and roots, and find out what might grow here."  Yeah, me too! 

 

ole@universeodon

Sunday,  01/08/23  02:21 PM 

When the muon was first detected, renowned physicist I. I. Rabi had the perfect reaction: "who ordered that?"  It was a confusing new thing and it wasn't immediately apparent to anyone what to do with it.

Well that's sort of the way I reacted to Twitter, I'm still a rather reluctant late adopter, and (like most people) I follow way more than I react or post.

Now we have Mastodon!  Yay.  And my first reaction is: "who ordered that?"

It's an open-source distributed messaging system which has a lot of similarity to Twitter.  Anyone can post anything, link to anything, and comment on anything.  There are likes (favorites) and "retweets".  Pictures and videos.  And there is a content moderation protocol which is designed to keep a certain level of sanity - eliminate child pornography, for example.

People who were unhappy with Elon Musk's acquisition of Twitter have searched for alternatives, and to date Mastodon has emerged as the most popular alternative.  It's vastly smaller than Twitter, and there's such a huge network effect already in place that it's impossible to imagine Twitter being supplanted.  But unlike some moves in the physical world - like moving to Canada - online you can be in two places at once.

The biggest difference is that with Twitter, the whole thing is operated by one company, whereas with Mastodon there is a network of servers cooperatively communicating, but each operated by separate companies or organizations.  Kind of like with email, you have to choose who you want to be "your host", and then after that you can interoperate.  Your host is responsible for managing your account; it could go off the air, taking you with it.  And it decides what to moderate - different hosts have different policies.

I decided to see what the fuss is all about, and opened an account: ole@universeodon.com.  Now I can post there, and read Mastodon messages, and follow people there.  Yay.  But if I want to check on flooding in Santa Barbara, Twitter is going to have way better information.  Way more junk and ads too, but so it goes. 

[Update, yay, dlvr.it supports Mastodon.  So now when I post here, a link is relayed and posted there, just like with Twitter.]

 

ancient history

Tuesday,  01/10/23  03:45 PM

When I started blogging again last December I had a backlog of "interesting stuff" to relay.  Some of it was interesting at the time, but the interestingness was timeframe specific.  Other things are interesting exactly because they provide a point-in-time snapshot.  And some things are just interesting!

June 2020: Facebook and PayPal invest in Southeast Asian ride-hailing giant Gojek.  Biggest country you never think about: Indonesia.  Fourth biggest by population.  An island nation where everyone has a cellphone and most everyone has Internet access.  Gojek are way more than a ride-hailing service, they're on the path to be an "everything app" like WeChat.  This makes sense. 

July 2020: Meet BukuWarung, the bookkeeping app built for Indonesia’s 60 million "micro-merchants".  BukuWarung, meet Gojek.  Imagine signing up 600,000 merchants in one year! 

July 2020: Visual Capitalist: The global inequality gap.  As always from them, great data, and great to see visually.  

BTW Indonesia's 280M people are near the top of the third world in most categories.  Ahead of China (#1) and India (#2) in many of them.

One of those point-in-time things, Sept 2020: VDH: The news as we once knew it is dead.  "Why? ... Picture yourself in your 50s in a job where you've spent 30 years working your way to the top, to the cockpit of power … only to discover that you're presiding over a dying industry. The Internet and alternative media are stealing your readers, your advertisers and your top young talent."

Nov 2020: via Digg: A Visualization Of The Largest Cities By Area Around The World.  It's amazing how densely we live! 

Continuing the Indonesia theme: Jakarta packs 10M people into [only] 256 sq miles.

Dec 2020: Capturing the great conjunction.  As Jupiter and Saturn crossed paths closely, the ISS crossed between them.  Wow. 

Talk about topical, Dec 2020: Matt Ridley: Why mRNA vaccines could revolutionise medicine.  Note this article was written before all the Covid vaccines became available in Spring 2021.  I remember reading it thinking, hmmm... 

Dec 2020: jwz: [the game of] Life at 50.  A pretty interesting look back at a pretty interesting "game".  I love the Martin Gardner remembrance, remember Scientific American's Mathematical Games column? 

timeless, Jan 2021: Matt Ridley: the folly of renewable energy.  "If you judge by the images used to illustrate reports about energy, the world now runs mainly on wind and solar power. It comes as a shock to look up the numbers. In 2019 wind and solar between them supplied just 1.5 percent of the world’s energy consumption. Hydro supplied 2.6 percent, nuclear 1.7 percent, and all the rest — 94 percent — came from burning things: coal, oil, gas, wood, and biofuels."

Jan 2021: Stephen Wolfram: from Assembly Language to a Nobel Prize.  On the calculations which come from Feynman diagrams... 

Feb 2021: Joshua Newman: how to be lucky.  "Most of us think that luck just happens (or doesn't) but everyone can learn to look for the unexpected and find serendipity."  I so believe this. 

Feb 2021: Fry's Electronics closes permanently nationwide.  A sad moment, but inevitable and long-expected.  I spent many a pleasant hour browsing their shelves... 

June 2021: Bill Gurley: Customers Love Free Stuff … But That’s Not Your Problem.  I love Bill's posts, his vision is truly above the crowd.  This was posted at a time of "hot" IPOs, not now, but these times will be back. 

---- so yay, we made it halfway from 2020 to today ---
And to wrap up:
 

July 2021: The Indonesian government announced that they would deliver free telemedicine services for COVID-19 patients with mild symptoms across the country.  Via Bercie Mesko, who comments: "It's only possible for them as they have startups that did the hard work."

 

 

 

ChatGPT re ChatGPT

Tuesday,  01/10/23  05:00 PM

So I linked to an article in Salon criticizing ChatGPT for being "artificial" intelligence.  (Actually, they didn't put it that cleverly.)  And I noted, "I think we should ask ChatGPT to write an article, in the style of Salon, about how AI is not really intelligent."

So:

Write an article, in the style of Salon magazine, about how ChatGPT isn't truly intelligent

Chatbots have come a long way in recent years, with advancements in machine learning and natural language processing (NLP) allowing for increasingly human-like interactions. One of the most popular chatbot models is GPT-3, developed by OpenAI. While GPT-3 can certainly hold its own in conversations and even generate creative writing, it is important to remember that it is not truly intelligent.

It is important to note that true intelligence is a complex concept and is not fully understood. However, a key component of intelligence is the ability to understand and process new information, and make decisions based on that information. GPT-3, like other chatbots, relies heavily on a large dataset of pre-existing text to respond to prompts. It has been trained on a wide variety of texts, including books, articles, and conversations. This allows it to understand the nuances of language and respond in a way that seems human-like.

However, GPT-3 lacks the ability to truly understand the world in the way that humans do. It does not have knowledge of the world outside of the text it has been trained on. It does not have consciousness, the ability to perceive, think, and feel like a human. It does not have the ability to make decisions based on experience, or to process new information in the way that humans do.

One of the key limitations of GPT-3 and other chatbots is that they do not understand context. They can respond to a prompt with a coherent and seemingly relevant answer, but they do not understand the meaning behind the prompt or the context in which it was asked. For example, if you ask a chatbot, "What is the capital of France?" it will likely respond with "Paris." But if you ask, "I'm thinking of a city in Europe that is the capital of France," the bot may not be able to understand the context of the question and give a coherent answer.

Another limitation is that GPT-3 does not have the ability to reason, which is crucial for understanding the world. It does not have the ability to connect new information with existing knowledge, or to make deductions based on that information. This limits its ability to solve problems and understand complex concepts.

In conclusion, GPT-3 is a powerful tool for natural language processing and can hold its own in conversations and creative writing. However, it is important to remember that it is not truly intelligent. While it can simulate human-like responses, it lacks the ability to truly understand the world and make decisions based on that understanding. As we continue to make advancements in AI, it is important to keep this in mind and recognize the limitations of current technology.

I don't know if ChatGPT truly captured Salon's style - the clickbait gotcha is missing - but that's not bad.  Arguably better than what they wrote...

 

Facebook links?

Wednesday,  01/11/23  11:01 AM

Way back in 2009 I began relaying links to blog posts in Twitter, and on Facebook.  At some point I stopped (I think the mechanism broke because of OAuth), and then I started back up with Twitter via dlvr.it.  Today more people come here from Twitter than any other way.  But I never restarted Facebook.  I think my Facebook friends are a different group than my blog readers, and maybe using Facebook as a relay isn't expected or desired.  But I'm not sure, so I thought I'd ask:

should I relay links to blog posts on Facebook?

yes
63%

no
3%

um? - don't care
33%

 

(ended 01/31/23)

 

Wednesday,  01/11/23  06:10 PM

Today was cold but not wet.  Another good day for coding, Zwifting, and blogging.

Spent most of the day trying out variations on this diagram.  As usual there appears to be a trade-off between easy and fast.  So be it.

Molly White: In 2022, web3 went just great.  [via Miguel de Icaza]  Certainly a lot of well-publicized hacks and other disasters.  But the overall lack of value creation hasn't sunk in. 

Reading about the FTX implosion I'm struck by the circular nature of so much of the crypto investment.  Bloomberg notes: crypto banks owe themselves money.

Meanwhile: Coinbase are laying off 1,000 people.  My first reaction: Coinbase has 1,000 people?  What do they do all day?

Bill Gurley: On the Road to Recap.  He Tweeted "I wrote this five years too early".  I hope a lot of entrepreneurs read it back then, otherwise they'll wish they had... 

Boing Boing: In a challenge to Google, Microsoft is adding ChatGPT to Bing.  Lots of buzz about this, including strong rumors of a further Microsoft investment in OpenAI.  It's interesting to think about what this would look like.  Your chats get turned into a search predicate?  Or the engine does a search, and renders the results for you as chat? 

Miguel de Icaza: to my surprise, jwz's 'CADT model' applies even to large organizations.  Maybe he should have written especially. 

CNET: South Korean Moon Mission Delivers Devastatingly Gorgeous Earth Views.  Indeed.  What a great time for space! 

Lawrence Person: dear restaurants, shove your damn QR codes.  This wouldn't matter so much if the online menus were simply renders of the paper menus, but often they're inferior and incorrect subsets. 

This is way more interesting than you might think: A Civil Engineer Explains The Difference Between Spillway Gates, And What They Do.  Who doesn't like giant dams, and who hasn't wondered what happens when there's too much water?  I love his 3D printed models, too. 

Agree entirely; Wired: Andor is a master class in good writing.  As enjoyable as the original Star Wars and the Mandalorian, and for the same reasons ... fun characters, great stories.  I especially like the inside baseball on the Empire side.  The force is strong with this one. 

Bonus observation: the special effects are great too; how excellent is that water planet prison?

More, sooner, please: Nuclear power catching the eyes of Silicon Valley's tech billionaires.  I have always been mystified why this hasn't happened sooner.  EVs are all very exciting but still largely fossil fuel powered.  And solar and wind are weak and worthless alternatives. 

xkcd: Biology vs Robotics.  So great. 

Becker's: Amazon will be the biggest threat to health systems' core business.  Yep.  Amazon is the biggest threat to everyone's business, but healthcare will be one of its largest conquests.  The key here is the value add of consumerism. 

Oh but wait: Google introduces chatbot for healthcare providers.  Looks like it's Microsoft + OpenAI vs Google + DeepMind, and so what will Amazon do?  Stay tuned :) 

Well this is interesting: Wolfram|Alpha as the way to bring computational knowledge superpowers to ChatGPT.  This article has some pretty interesting examples where ChatGPT is wrong, and how Wolfram|Alpha can be a resource.  Kind of like a human holding a calculator. 

BTW have to say for the Nth time, Wolfram|Alpha is amazing and weirdly off the radar.

 

 

TGIF

Friday,  01/13/23  09:17 AM

Good morning!  It is Friday.  The 13th.  You have been warned :)

For those who found this post on Facebook, yay.  If you like the links from there to here, or don't, please say so...  Right now survey is running 53% yes and only 13% no, but it's a small sample...

Thanks to Matt Struble for this awesome sunrise series from this morning!  I wasn't up, but seems like I should have been...

Red sky at night, sailor's delight, red sky in morning, sailor take warning ...

<rant>I love the way Apple's iCloud sync finally works, and any pic I take on my iPhone or iPad is automatically synced to my PC.  (I say finally because it was not always so...)  But I dislike the way it takes so long.  Why is the polling interval 15 min?  Half the time I end up emailing a specific pic from my phone to my PC because I'm tired of waiting for the auto-sync. </rant>

Have been seriously loving Zwifting.  Three days ago, rode in France (that's Mont St Michel at left).  Two days ago, Japan.  Yesterday, New York Central Park*.  Meanwhile it was like zero here outside.  If you're a cyclist you should try it, and if you want to do a ride, let me know.  I'm about 2w/kg... 

* BTW love the way the Zwift people have reimagined Central Park with translucent aerial bike paths; great views, and adds some vertical.  NY city planners take note :)

Another thing I'm loving is Feedly.  SharpReader, I hardly knew ye (well, except for the last 20 years :).  Being able to read posts on my iPad and then having them relayed to my PC automatically has been game changing. 

Blog inside baseball - made some changes to my archive.  Hope you like them.  I *still* think this way of archiving old posts is the best I've seen.  Easy to find stuff and easy to browse.  YMMV

And so now we have the ThinkPhone, a Lenovo brand extension to the ThinkPad laptop line they bought from IBM.  Kinda cool; I'd be tempted if I was an Android person. 

CNet is experimenting with an AI Assist.  Well, why not?  It might be more insightful and certainly less biased than their human reporters.  And there will still be human editors. 

Medium creates its own Mastodon instance.  Huh, interesting.  I'm not a Medium follower or user, but seems like this makes sense for those who are. 

BTW haven't been able to figure out how to identify inbound links from Mastodon.  I'm guessing near zero.  If you got here via Mastodon, can you let me know?

Cool!  The James Webb telescope's first confirmed exoplanet is 99 percent the size of Earth.  It's fiendishly hard to find exoplanets, they are not directly "visible", but have to be deduced from their effects on their host stars - a slight dimming as the planet transits, or a tiny gravitational wobble.  The smaller they are the harder they are to find, which is why most exoplanets found to date are gas giants. 

Visual Capitalist: The Most Important Science Headlines of 2022.  Excellent, I love this.  And yes I know you can't read them here, you have to click through :)  Hard to pick the winner - but DeepMind's announcement of AlphaFold might end up being the most important... 

From NASA: why isn't every year the warmest year on record?  They explain, but you get the feeling they are rooting for climate change to be "real".  Weird.  They're a scientific organization, why not just stick to the science?  Yeah I know the answer. 

This is what NASA should be doing: TESS discovers planetary system's second Earth-size world

Meanwhile: SpaceX flexes might with simultaneous Starship, Falcon 9, Falcon Heavy, Dragon operations

Space is not easy: Virgin Orbit blames launch failure on upper stage anomaly.  The first ten SpaceX launches didn't go perfectly either.  It takes time and lots of hard work.  Glad they are trying and wish them luck! 

At this point no surprise: Tesla beats BMW to become the #1 in luxury sales.  Ten years ago, who woulda thought? 

The pic at right is a BMW EV, highly rated.  A friend of mine just got one - he was a Tesla Model S owner, wanted something new - he likes it a lot but every phrase is a comparison to Tesla.  In my eye it is not pretty.

Sure go ahead and make fun of Elon Musk, but ... SpaceX and Tesla.  And now you think he can't run Twitter?  Stay tuned. 

You've all heard Jeff Beck has passed on...  especially poignant for me was Ottmar Liebert's tribute. "For me he was THE best. His tone, his phrasing, his bending… Damn, he was so good."  It's notable how many great guitarists are singing his praises.  For me he was good but not in the Stevie Ray Vaughan, Eddie Van Halen, Robin Trower class.  YMMV. 

xkcd hits another one out of the park: Outdated periodic table

Okay, so much for a filter pass ... now onward to more CUDA coding!

 

 

Santa Barbara update

Saturday,  01/14/23  10:25 AM

Yesterday I voyaged up to Santa Barbara to check on my boat, and the Yacht Club; you've maybe read about the storms last week, and the washing away of the beach, and the silting of the harbor mouth.  It's quite a thing; last time there was a storm like this was 1983.

So the good news: my little boat is fine - it lives in the harbor - just had to adjust it on its air bed, and take down the Christmas decorations (!) - and the club is fine too - turns out it's waterproof, and designed for this.  (If you're not familiar the club usually sits on a beach, about 50 yards from the surf, and now that beach is gone.)

The bad news is that sailboat racing has been suspended until they can dredge the harbor mouth.  The picture at left was taken last week; that's the end of the breakwater, and the harbor mouth is now so shallow that people are surfing over it!

The weather continues to be so crummy that sailing probably wouldn't have been much fun anyway.  In a few weeks maybe the harbor mouth will be open again, the sun will be out, and we'll be back to our normally scheduled Southern California winter.  Stay tuned!

 

 

Zwift group ride

Saturday,  01/14/23  10:43 AM

So I did my first Zwift group ride, in Neokyo on the Makuri Islands.  (No, don't bother looking this up on Google Earth, these are made up islands and Neokyo is inspired by Tokyo but doesn't exist.)  It was pretty cool, with many of the same elements found in any group ride.

First, the pace was high, much higher than I would have maintained riding alone.  The ride was paced by an NPC, an electronic rider (shown as neon green in the picture) who maintained 3w/kg the whole time.  Fortunately Zwift simulates drafting, so with a lower-than-that watt output I was able to stick with the group.  Anytime I got slightly ahead the group caught me - just like in a real group ride :) - and anytime I fell behind I had to work hard to catch it again - just like in a real group :/.  I love the way Zwift simulates the left/right up/down nature of the group, riders moving around, although in Zwift you can't hit anyone and crash, you just sort of glide through them.  As the group overtook single riders we parted slightly to let them through but mostly just rode right through them; no harm no foul.  The "ride ons" and other jabber in the group were also fun, and some people had even mastered commenting while riding, which I have not.

Zwift uses a cool architecture wherein you can have a computer hosting your ride, and use an app on your phone as a "companion" to guide the ride.  The phone can be on your handlebar or in your pocket, and lets you take quick actions like sending messages to riders around you.  I'm still figuring out how to do that while over 2w/kg but it's nicely done, a good use of all the tech.

With lots of rain outside already looking forward to my next ride inside later today.  Ride on!

 

Saturday,  01/14/23  10:55 AM

Wow, rain rain rain.  They tell me we still have a drought, but somehow I doubt it; check out the table of California lake water levels at right.  (Click to embiggen.)  Getting it all at once like this is going to cause problems though...

Some pretty large lakes are up a lot - Shasta 20+ feet! - but the one that caught my eye was Tahoe up one foot ... that is a lot of water.

Thanks Peter Simons for the chart.  Bonus observation: Newsweek still exists?

Fascinating: How Elon Musk's satellites have saved Ukraine and changed warfare

Am just rereading Red October for the hundredth time and struck again about the importance of communications in war.

NYTimes: The Party's over for us.  Where do we go now?  Conservatives debate the future of the Republican party.  I keep hoping there will be a place for conservative thinking in the Democratic party ... maybe I'm a dreamer, but I'm not the only one. 

I just rewatched The Big Short.  What an excellent movie.  (Based on an excellent book, which it pretty faithfully follows.)  The tragic economic crash of 2008-09 already seems like ancient history, right?  But so many echoes of that in the current crypto bubble

When it became obvious (to a few) that mortgage-backed securities like CDOs were going to crash, there was no way to short them; they had to create credit default swaps so they had a way to do it.  It's now obvious (to a few) that crypto tokens are such a house of cards; is there any way to short them?  Do we have to create one?

I was reading old posts and came across this one from 2020: Tim Bray: I don't believe in Bitcoin.  He noted the lack of adoption by geeks as a red flag.  He was and is right.  The next killer app on the blockchain will be the first one. 

Bonus note: back then I was blogging at Buck's in Palo Alto.  Just after InTouch was acquired by Teladoc, and just before the Covid pandemic hit.  Wow.  Only three years ago but ... wow. 

Charlie Munger on remote work: Those people are never going back.  Nope. 

My iTunes Match subscription is up for renewal.  I hardly ever use iTunes anymore.  I listen to music via Spotify or LiveOne (formerly LiveXLive, formerly Slacker; it came with my Tesla, I started using it, and still do...)  I'm wondering if I still need it?  And what will happen if I cancel?  LMK if you have a thought... 

Marc Andreessen: Pure, absolute, indescribable magic.  It kind of is, right? 

In 2015 I linked Brad Feld: Hollywood's massive miss on strong AI.  At that time (and maybe still) Minority Report was thought to be the best attempt to represent "strong AI" in a movie.  Now we have ChatGPT, which is pretty strong.  Will there be a movie made?  Yes.  Will it be a good one?  Um... 

Okay.  I read Project Hail Mary, the new book by Andy Weir, author of The Martian, and thought it was excellent.  And inevitably, it will be made into a movie.  But starring Emma Stone as Rocky?  Um what?  Rocky is a 5-legged alien spider made of rocks.  He "speaks" with musical sounds.  I get the desire to pair her with Ryan Gosling but I do not get this casting.  I'm most curious to see. 

This flying electric boat wants to be the Tesla of the sea.  So be it. 

While I was in Monaco last October - competing in a regatta, yes, that's a story I must tell - I noted that virtually all of the tenders for those huge yachts berthed there were electric.  The wave of the future.

Wow, happy 20th birthday to Safari!  Seriously that makes me feel old.  I so remember when it came out, a real "who ordered that" moment.  At that time the competition was IE (!), and Firefox (remember them? :) 

If you are determined to mine Bitcoin, here's a good solution for you: a Bitcoin mining space heater.  I'm pretty sure Satoshi et al never anticipated electricity cost to be a limiting factor on adoption, but here we are. 

The WSJ: CES 2023: All the Cool, Crazy and Cute Tech We Found in Las Vegas.  In case you missed it, you didn't miss much.  This list is notable for what is not on it - anything groundbreaking.  Maybe the tech for 3D without glasses, I'll put my marker down for that. 

Did you know?  The WSJ is by far the largest circulation newspaper in the world.  Visual Capitalist: The top 25 newspapers by circulation.  I didn't find this surprising, but I was kind of amazed that the circulation was only 697K.  How the mighty have fallen.  The LA Times is 6th on the list, at 142K. 

Well onward into the day ... lots of rain, no sailing, no cycling (except Zwifting later), hence, coding!

 

 

Monday,  01/16/23  10:32 PM

Good evening.  Still raining, but sun on the horizon.  Yay.  Watching football - big queue left over from the weekend - Zwifting, coding, and of course, blogging...

Can't decide whether to synthesize a title for filter pass posts or not.  Guess, not?

A gift from my daughter Alex: Exceptionally Bad Dad Jokes.  They are.  "When does a joke become a Dad joke?  When it becomes apparent."  You have been warned.

News you might use, from Shelly Palmer: the alphabet soup of video monitors.  A perfect example of how scale reduces cost; you can get a great, huge monitor for <$1,000 today, and it's good enough; the best might be better, but it's at the margin now. 

So cool: Wired reports a teenager solved a stubborn prime number 'look-alike' riddle.  Have you ever heard of Carmichael numbers before?  Me neither.  But now we know there are infinitely many of them, and for sufficiently large X there's always one between X and 2X. 

This is pretty great, from The Discoverer: 10 natural wonders that prove magic exists.  Not sure they are actually magic, but certainly awe-inspiring.  And all on my bucket list. 

The one which might actually be magic is the sailing stone racetrack playa.  (Though scientists finally caught the stones moving in 2014 - nudged by wind while floating on thin sheets of ice - so maybe not magic after all.)

This is hilarious: Avatar and the Papyrus typeface.  Ryan Gosling is perfect. 


Too bad I never went: Noma is closing its doors.  (Headline should have been, "Noma no more".)  Somehow the strain of keeping one of these top restaurants at the top tells.  I hope they don't end up like The Menu :) 

For some reason our local theater had billed The Menu as a comedy, so we went.  Pretty interesting, starts as a great sendup of high-end dining, but then...

Washington Free Beacon: the lives of Brian.  (... Johnson, frontman for AC/DC after Bon Scott died.)  "The big rock acts have become tribute bands to their younger selves."  That's kinda true, but not always.  We recently saw Robert Plant and he was great, but in no way a tribute band for Led Zeppelin.  Then again he had quite a solo career after Zep. 

Interesting question: Was Salesforce's acquisition of Slack a bust?  Not sure how the numbers worked out, but they definitely paid a lot: $27B!  How much revenue would it have had to generate annually to support that price?  Or asked another way, how much increase in the value of Salesforce stock?  Yet again, it was primarily a strategic move; Microsoft Teams was emerging as an important competitor, and Salesforce maybe felt they had to be in that market.  Most development teams I know love Slack more, but many are being asked to use Teams because it's part of Office. 

Popular Science considers: Dark Matter, Jupiter's Moons, and more: what to expect from space exploration in 2023.  They had me at Dark Matter.  And the picture of the SpaceX Dragon too. 

Sadly: 'The Science' is ruining science.  "Perhaps the most startling aspect of the story is the sub-headline Nature used: 'No One Knows Why.'"  Well I'll be 'no one': political correctness has intruded. 

Interesting that this was published in Nature, which itself is exhibit A of this trend...

Powerline: The Science isn't scientific.  In regards to Covid...  

Real science: SpaceX USSF-16 Mission.  So cool.  Yes, you must watch! 

Elon Musk: And that's how we will land on Mars.  I haven't gotten tired of watching two stages land together side-by-side yet.  Still feels like I'm watching a good movie.

Apropos: SpaceX signs agreement with US National Science Foundation to prevent Starlink’s interference with astronomy.  "Elon Musk, the CEO of SpaceX, said SpaceX would ensure that Starlink has no material effect on discoveries in astronomy. 'We care a great deal about science'"  Well, good. 

Wow.  Sam Bankman-Fried's secret 'backdoor' discovered, FTX lawyer says.  "Sam Bankman-Fried instructed his FTX cofounder Gary Wang to create a "secret" backdoor to enable his trading firm Alameda to borrow $65 billion of clients' money from the exchange without their permission, the Delaware bankruptcy court was told Wednesday."  That's B as in Billion.  Amazing. 

Brad Feld: Reflecting on Ponzi Schemes.  So yeah, but more interesting, he followed with I don't hate crypto.  I think he took a lot of inbound from fellow investors who don't want the secret to be revealed. 

News I hope you won't have to use: How to Use the iPhone 14’s Emergency Satellite SOS.  Interestingly it only works if you are off the grid, so it's hard to test.  But if you need it, you need it. 

This was not an easy feature to add, I wonder why they did it?  Will people buy new iPhones to get this feature?  Huh.

Wow, what a long post!  I need to stop blogging now.  Okay one more... 

This tiny little capybara travels around via surfboard, except the surfboard is a big capybara.  You're welcome :) 

 

 

 

cookies

Monday,  01/16/23  10:52 PM

<rant>

So, cookies.  Since the dawn of Internet time, websites have been able to work around the inherently stateless mode of web browsing by sending small bits of information called "cookies" to web browsers.  Later, when the browser returns to the site (could be 1s later), the cookies are sent back to the site, enabling state to be maintained.  A good solution.

Later, people began exploiting cookies in bad ways.  A second website belonging to, say, an advertising company, could send cookies, then later have them sent back, and this allows them to track you.  And/or, in a variation, the first website could share your information with the second website, as part of presenting ads, or for any other reason.  A bad problem.

And so this somewhat arcane technical issue hit the radar of regulators, and they passed laws: now, in order for a website to send you a cookie, you have to agree to it first.  And so now every website asks you, annoyingly, if you're okay with cookies, and you say "yes" and life goes on.  After you say "yes" all the tracking/sharing stuff that could happen before can still happen.  If you say "no", the website probably won't work and you can't use it.  So this is a terrible non-solution to the problem.

What can be done?  Well first, we shouldn't ask legislators to solve this kind of problem.  They don't understand it, and they pass dumb laws which make life more difficult but don't solve the problem.  We need a simple technical solution to a simple technical problem.

What if browsers had a global option to accept cookies?  You could turn it on or off, depending on your preference.  Oh wait - they already had that option.  So no laws were actually needed.  But yeah if you said "no" to cookies, the website probably didn't work.

What we really want is an option that says, "don't send cookies from anyone else, and don't share my information with anyone".  That would be an easy option to set, it would be sent to every website in the HTTP header, and then it would be on the website to accede to your request.  This is what has happened on phones, for example; you are asked whether it's okay to share your information (once!), and if you say no, then it's no.  Every phone app doesn't have to ask every time.  No new laws needed.

In the meantime, we all have to say "yes" every time we visit a website.  How great is that?  And it doesn't solve the problem, because we always say "yes".  Well it isn't our biggest problem, but it's another example of a dumb law that makes our lives worse.  Onward!

</rant>

 

 

Happy Rabbit Year!

Monday,  01/16/23  11:09 PM

Happy Lunar Rabbit Year!

As helpfully explained in this article in the South China Morning Post, this holiday is celebrated in China, Indonesia, and many other countries; over 2B people celebrate the new year this way!  And so now we are in the Year of the Rabbit.

These old traditions are cool - I love thinking about what people were thinking about when they were created.  They didn't understand about the Sun and the Moon and the Planets and the Stars, but it was easy to think they ruled our lives.  Any sufficiently advanced technology is indistinguishable from magic. 

And I love thinking about the fact that we US people are so Western centric.  The fact that billions of people - way more than us - are celebrating a new solar orbit differently doesn't enter our thinking.  (Well, most of the time it doesn't...)

新年好!

 

Wednesday,  01/18/23  10:14 AM

A quick filter pass ...

I'm having a weird problem with my mouse - it's double-clicking at random.  After nearly five years and approx 1B clicks, the clicker is breaking.  So be it.  What's interesting is how many "hidden" features are uncovered by this random behavior.  There's no standard for revealing what might happen on a double click (and actually, with the latest no-affordances crummy design ethos, a disappearing standard for revealing what might happen on a single click too).  So you do it, accidentally, and see what happens ...  it does not however blog automatically.  Yet.

Today's picture is lifted from the Tour Down Under cycling race, first of the season, and first time in three years this event is being held.  Yay, the "normal" pro cycling calendar is back.

Lots of non-news today.  There was a report about Apple's AR/VR headset, and everyone went crazy parroting everyone else's report about it.  And it's all a rumor, nothing real.  Shows you that anytime you know about something in the news, you know they got it wrong

(BTW this is not a picture from Apple, just someone's guess ... but feel free to share!)

Ann Althouse: Honoring Martin Luther King Jr. with a hilariously/tragically bad monumental sculpture.  It is indeed tragically bad.  I honestly debated whether to post this picture - this is a G-rated blog - but decided you would want to see it if you haven't already.  WTF, right? 

Dave Winer is working on a new tool called Feedland, "a feed management system for individuals and groups".  I don't get it yet.  I do think RSS is the bee's knees.  Not sure what a feed management system would do for me.  But Dave often sees around the corner, I'm staying tuned. 

Now becoming apparent: Twitter was the ultimate cancellation machine

Meanwhile, and interestingly, a lot of third-party Twitter clients have had their credentials revoked, allegedly for violating Twitter's terms of service.  Reading between the lines, likely they displayed Tweets without ads, thereby depriving Twitter of revenue.  So yeah, it sucks for them, but so be it.  I would agree their communication has been curiously lacking; the Elon Musk regime have mostly explained the changes they've made. 

xkcd: Sunspot Cycle.  Feels like I link every new post, but then again every new post is so good

This you have to watch: Drone Dives the Full Height of the Burj Khalifa.  Amazing.  [via Kottke] 

I have not heard, does this building move?  In the wind?  What is it like to work on the higher floors?

Ottmar Liebert re-visits CDs.  "the album in a store is simply a container for the ideas of the musicians and producers".  The challenge for artists today is how to capture that value, when the incremental cost of distribution is $0 online.  His last ten albums in the last ten years have been distributed online, and he's still doing it, so somehow it must be working... 

Interconnected: filtered for ants and laws.  A meanderingly thoughtful post: 

  1. Let’s say we could chat with ants. Could we trade with them? What would we want from them?
  2. Let’s say you want to stop eating cows and chickens, and instead eat insect-derived protein ... Is that ok?
  3. Corporate insecthood.  In short: people treat almost everything as people, at least a bit, including companies.
  4. A special court in Ancient Greece for holding accountable lifeless things

 

 

Hello CUDA - GPU series #1

Saturday,  01/21/23  09:28 AM

Hi all, this is the first of a series of posts about CUDA and GPU acceleration.  (Next post here)

For some time I've been aware of GPU acceleration, and NVidia, and CUDA, but it was a bit of a black box.  Recently I've been working on a cool project which has enabled me to double-click on this to understand what's inside the box.

Maybe it would be good to start with an introduction: what is a GPU, why GPU acceleration, who are NVidia, and what is CUDA.



What is a GPU

GPU is an acronym for Graphics Processing Unit.  The diagram at right shows the overall architecture of a modern workstation (aka PC). 

There's a CPU, with several "cores" (maybe 10 or so), and a GPU, with many many cores (maybe 100 or so).  The cores on a GPU are often called Stream Processors, or SPs, for reasons that will be apparent a bit later.

In the parlance of GPUs, the CPU is referred to as the Host, and the GPU is called the Device*.

* A number of terms in the GPU world are "overloaded"; they mean one thing in general, but a specific different thing in this world.  I'll try to call out the specific uses of these terms as we go along.

In addition to the CPU cores the Host has Main Memory (maybe 16GB or so).  This memory is somewhat more complicated than a simple box, but for now we'll treat it as a big blob of storage for data.  The Device also has its own Graphics Memory (maybe 16GB or so, maybe more).  Again, it's more involved than a box, but to start we'll treat it as such.  The Device also has a video interface for connecting one or more monitors.  This was the original reason for the existence of GPUs, but as we'll see more recently they've been used for other purposes.

The CPU and GPU (or we shall say Host and Device) communicate over a Bus.  The Bus is fast (currently tens of GB/s over PCIe), but not nearly as fast as Main Memory (roughly 50GB/s) or Graphics Memory (even faster, hundreds of GB/s).
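To make the Host/Device split concrete, here's a minimal sketch (mine, illustrative only) that asks the CUDA runtime to describe the Device - the SM count and Graphics Memory size discussed above.  The calls are from the standard CUDA runtime API; you'd compile it with something like nvcc devquery.cu -o devquery, and of course it needs an NVidia GPU to report anything:

```cuda
// Minimal Device-query sketch: the Host asks the CUDA runtime to
// enumerate Devices and report their properties.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA devices: %d\n", count);

    for (int i = 0; i < count; i++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s\n", i, prop.name);
        printf("  Multiprocessors (SMs): %d\n", prop.multiProcessorCount);
        printf("  Graphics Memory:       %zu MB\n",
               (size_t)(prop.totalGlobalMem >> 20));
    }
    return 0;
}
```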


Evolution of GPUs

The history goes back to the earliest days of workstations (PCs).

GPUs began as simple graphics adapters.  CPUs had one or a small number of Cores and Main Memory, and Graphics Adapters simply added a Video Interface to display things on a monitor.  The Video Interface accessed the Main Memory over the Bus.


 

Specific Graphics Memory was added to Graphics Adapters to offload the CPU.  This enabled much faster display of graphics, but also, left the CPU and Main Memory free to perform tasks in parallel to displaying information on a Monitor.


 

GPUs added processors for simple operations, like rendering textures, and implementing "sprites" (small regions of images which moved against a background).  These processors were simple and slow compared to CPUs, but they sped up the graphics experience, and further offloaded CPUs.


 

GPUs added more processors, enabling more complex video operations to be performed on the GPU instead of the CPU.  Unlike CPUs, the emphasis was on performing multiple simple operations in parallel.


 

At the same time, CPUs began adding more cores too, enabling more parallel processing. 

This was partly a reflection of the more complex workloads being performed by workstations - many programs running at the same time - and partly due to computer architecture - it became easier to add more cores than to make individual cores faster.


 

As the complexity of CPU workloads continued to increase and as GPUs became more capable, they began to be used for non-video processing.  They were good for offloading compute tasks which were highly parallel.

Graphics Adapter companies like NVidia began making software toolkits like CUDA to facilitate this use of GPUs.


 

As the usage of GPUs for non-video tasks increased, GPUs added even more processors.

Current GPUs have 100s of processors which can concurrently process 1,000s of threads, enabling highly parallel compute tasks to be accelerated far beyond what is possible on CPUs.


Why GPUs for computation

GPUs optimized for highly parallel computing were perfect for applications like image and video processing, and AI/ML computation.

This diagram illustrates CPU processing without a GPU.  The green arrows represent units of work, with time going from left to right.  Multiple CPU cores can process tasks concurrently, up to about 20 in parallel.  The CPU threads share Main Memory.


 

This diagram shows CPU processing with a GPU.  The blue lines show the CPU spawning multiple GPU threads - the yellow arrows - which run in parallel to the CPU threads.  Hundreds and indeed thousands and tens of thousands of parallel threads can run in this way.  The GPU threads share Graphics Memory, separate from the Main Memory used by the CPU.

The process of starting GPU processing is called "invoking a kernel", where "kernel" is used to refer to the software program running on the GPU (/Device).
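Here's what invoking a kernel looks like in practice - a hypothetical vector-add sketch, not tied to any real project, but it matches the diagram: the Host allocates buffers in both memories, copies inputs across the Bus, launches a kernel with thousands of Device threads, and copies the result back:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// The "kernel" - the program that runs on the GPU (/Device).
// __global__ marks it as invokable from the Host, executing on the Device.
__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's index
    if (i < n) c[i] = a[i] + b[i];                  // one element per thread
}

int main() {
    const int n = 1 << 20;                          // a million elements
    size_t bytes = n * sizeof(float);

    // Host buffers, in Main Memory
    float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes);
    float *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers, in Graphics Memory
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);

    // Copy inputs across the Bus, invoke the kernel, copy the result back
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);
    int threads = 256, blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(da, db, dc, n);        // thousands of parallel threads
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f\n", hc[0]);                 // 1.0 + 2.0 = 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Note the <<<blocks, threads>>> launch syntax - that's one of the "slight extensions to the language" that nvcc handles before handing the Host code off to the regular C++ compiler.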


NVidia and CUDA and OpenCL

NVidia was founded in 1993, making Graphics Adapters and chips.  In 1999 they shipped their first "GPU", coining the term, and heralding the evolution to come.  In 2003 Stanford's "Brook" project provided a way for applications to use GPUs for general-purpose computing; its lead developer later joined NVidia.  Then in 2006 NVidia shipped the first version of CUDA, a whole software environment for developing applications which used GPUs.  The initial application was of course gaming, which continues to be an important use case today, but it enabled many other uses as well.  Including importantly, acceleration of the execution of neural networks, which have revolutionized AI and ML applications.

In 2009 Apple formed a consortium with other GPU manufacturers like AMD and Intel, and announced the development of OpenCL, an "open" approach to GPU computing closely patterned on CUDA (which was and remains NVidia-only).  Eventually NVidia joined the OpenCL consortium also.

Today you can write applications with CUDA for NVidia GPUs [only], or you can write applications with OpenCL which will run on virtually any GPU [including NVidia].  But CUDA is optimized for NVidia, and NVidia remains the leading GPU vendor.  If you've developed an application in CUDA it isn't too difficult to migrate to OpenCL, because of the architectural similarity.

What is CUDA and how do you use it?

CUDA is an environment with four main pieces:

  • C/C++ compiler/preprocessor (nvcc, based on gcc)
  • Link libraries for many environments (Windows, MacOS, Linux, IOS, Android, ...)
  • Runtime support in GPU drivers
  • Runtime support on GPU device

CUDA programs are written in C/C++, and are compiled/preprocessed with NVidia's C++ compiler nvcc.  There are slight extensions to the language, and source files are named with a .cu extension instead of .c or .cpp.  A CUDA program contains some logic which runs on the CPU/Host, and some which runs on the GPU/Device.  The Host code is preprocessed by nvcc and then passed through to gcc (the GNU C++ compiler) for code generation.  The Device code is compiled into an NVidia-specific bytecode called PTX. 

PTX is stored as data inside the generated code.  At execution time the PTX is passed to the Device by the NVidia driver, and then processed by a JIT compiler on the Device.  CUDA programs are linked with CUDA libraries, which implement the CUDA API.  A single EXE results, which contains the Host code as machine instructions and the Device code as PTX.  This architecture is simple - only one EXE file contains both the Host and Device logic - and enables support for a wide variety of devices, since the Device itself interprets and executes the Device code at execution time.

NVidia provides great integration for CUDA programs and nvcc with many development tools, including importantly Visual Studio and XCode.

It's worth mentioning that there are many libraries available which implement CUDA "under the covers" for programs written in other languages, including importantly Python (CuPy, Numba, PyCuda, PyTorch, MXNet, and TensorFlow), C# (for Windows), Objective C (for IOS), and Java (for Android).  For many applications it isn't necessary to delve this deeply into CUDA programming; you can just pick the right library, use it, and happily take advantage of GPU acceleration.

 

But let's say you want to write CUDA directly?  How does that work?  Glad you asked ... stay tuned for the next installment of this series, where we'll work through some real-world examples.

 

 

Saturday,  01/21/23  11:47 AM

A lazy Saturday - watching football, blogging (about GPUs!), and wishing the weather was warmer and I could go sailing.  Last year this time I was in Florida, sailing in the Southernmost Regatta; a great time, but already fading into a distant memory.

At right, a start in the subsequent Florida Midwinters, in St Petersburg; 70 boats!  And warm weather in January :)

Well this is interesting: The Big Lie about Bike Helmets.  "As I dug into the history of these humble foam-and-plastic shells, I learned that helmets have a far more complicated relationship to bike safety than many seem ready to admit."  I'll let you decide ... oh wait, no, if you ride in California the State has decided for you ... 

Well anyway I do not wear a helmet while Zwifting :)

Parenthetical grinchy note: this article has a pretty animated image in it, and I wanted to copy it over, and it was a Webp image, and WTF, and so I had to convert it to a GIF, and so I did.  Why do people keep inventing new image formats?  Of all the things!

Asking the important questions: Is it a good idea to micromanage your life with an app?  Um, no.  "the internet is full of project management tools promising to simplify your life. But who do they really work for?"  OTOH, it's easier to install an app than to get something done :) 

so true ...

Well you've been reading the news, like I have: Get Real about Wages.  Yes they are going up, and no they are not going up faster than prices.  Inflation is real.  And the economic news is not good. 

The annual JP Morgan Healthcare conference: The biggest, buzziest conference for health care investors convenes amid fears the bubble will burst.  And that's Salon's take...  I will say I think the bubble is mostly non-companies doing non-things like crypto, and not healthcare.

Peter Thiel goes to where the puck will be: Founder's fund wound down 8-year bitcoin bet before market crash.  Many people will make money from crypto, they were early into the pyramid and got out before the crash.  But many more will not... 

Meanwhile from Medical Futurist: Pharma companies will acquire a bunch of medical #AI startups in the coming months.  Yes, of course. 

Related: Top AI companies in healthcare to keep an eye on.  "Concerning healthcare, artificial intelligence will redesign it completely – and for the better."  Likely, true, even if it doesn't happen overnight. 

Parenthetical note re "to keep an eye on" ... how about "on which to keep an eye" ... grammar!  Maybe ChatGPT will help :)

John Battelle: Is ChatGPT A World Changing Technology? (And Will We All Become "Centaurs"?)  By Centaur he means, humans paired with AI.  Well we all carry a mobile phone now, how soon before it is paired to our brains? 

Wrapping up with important information: How to take a selfie with a Quokka.  How have I lived so long without knowing about Quokkas?  An adorable cross between a Koala and a Kangaroo!

 

 

Hello CUDA: coding - GPU series #2

Sunday,  01/22/23  04:18 PM

Continuing the series about CUDA and GPU acceleration, let's talk about CUDA coding.  (Next post here)

How does it work?


To start, let's take a quick example.  Let's say we have two big arrays, A and B, and we want to create a third array C, such that for each element c = a² + b.  This is a massively parallel problem, perfect for GPU acceleration.  Here's how this could be done in a vanilla C program:

The main() function is pretty simple; it allocates the three arrays using the standard C runtime function malloc(), initializes the first two arrays A and B, then calls a function called domath() to do the actual computations into array C.  And then it deallocates the arrays using the standard free() function.

The domath() function consists of a simple loop through all the array elements.  It calls a subfunction called mathalgo() which performs the actual computation. 

This vanilla C program is single-threaded; all the computations are executed on a CPU core in a serial fashion.  Let's see how this could be accelerated.  Here's the same logic implemented in a "chocolate" CUDA program:

There are relatively few changes and poof! we have a CUDA program.  At the top, we included two new headers which declare the CUDA API.  The main function now has a scope qualifier, __host__,  which designates the function to be Host code.  This is the default, but for this example we made it explicit.  The storage allocation calls to malloc() have been replaced by calls to cudaMallocHost().  They work the same way, allocating a region of storage, but the storage is accessible to the Device as well as the Host.  We'll talk about memory in more detail a bit later.

The domath() function is now called with two extra parameters preceding the usual ones, enclosed by triple angle brackets.  This is a CUDA extension for "invoking a kernel", that is to say, for calling a function from the Host which will run on the Device.  This call is followed by a new call to cudaDeviceSynchronize(), which halts the CPU thread until [all of the] GPU threads have completed.  And then the storage is deallocated with cudaFreeHost() instead of free().

Moving up to the domath() function, you'll note it has a scope qualifier of __global__.  Global functions run on the Device, but are callable from the Host.  Under the covers there is a driver call from the CPU to the GPU which passes the bytecode of the kernel and causes it to be executed.  The mathalgo() function has a scope qualifier of __device__, which means it is Device code and only callable from other Device code.

Let's go back and look at those new parameters on the domath() call.  They define the number of parallel processing threads to be created on the GPU.  We'll talk about those in more detail a bit later, but for now, just know that this creates 1,024 x 64 = 65,536 threads!

The loop in the domath() function has been changed a bit too.  If we didn't change it, we'd simply execute the same loop 65,536 times.  Instead we want to break up the processing into 65,536 pieces.  The CUDA global data items threadIdx and blockIdx give each thread a unique thread and block Id.  Using these values, we can give each thread a different starting point - the value of index - and a step size which jumps over all the other threads - the value of stride.  So thread #100 processes array element 100, then 100 + 65,536, then 100 + 2 x 65,536, and so on.

With these changes we've done two things - we've offloaded the computation to the GPU, and we've organized the processing to compute 65,536 array cells in parallel.  This is the essence of CUDA coding.
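Putting those pieces together, the CUDA version might look roughly like this.  Again this is a sketch following the post's description, not the post's actual code, and it needs nvcc and an NVidia GPU to build and run:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// device-only helper: c = a*a + b
__device__ double mathalgo(double a, double b)
{
    return a * a + b;
}

// kernel: runs on the Device, callable from the Host
__global__ void domath(int n, double *A, double *B, double *C)
{
    int index  = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's start
    int stride = gridDim.x * blockDim.x;                 // total thread count
    for (int i = index; i < n; i += stride)
        C[i] = mathalgo(A[i], B[i]);
}

__host__ int main()
{
    int n = 12345678;
    double *A, *B, *C;
    cudaMallocHost(&A, n * sizeof(double));   // accessible to Host and Device
    cudaMallocHost(&B, n * sizeof(double));
    cudaMallocHost(&C, n * sizeof(double));

    for (int i = 0; i < n; i++) { A[i] = i; B[i] = 2 * i; }

    domath<<<64, 1024>>>(n, A, B, C);   // invoke the kernel: 65,536 threads
    cudaDeviceSynchronize();            // wait for all GPU threads to finish

    printf("C[3] = %g\n", C[3]);

    cudaFreeHost(A);
    cudaFreeHost(B);
    cudaFreeHost(C);
    return 0;
}
```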

In the next installment of the series we'll break this down into even more detail ... with even more examples ... please stay tuned.

 

 

Sunday,  01/22/23  05:23 PM

Another lazy day, more CUDA blogging, and more football watching.  But also got a chance to go sailing with my granddaughter Orionna, always excellent.  (Though it must be said, it was not warm.)

Speaking of not warm, a gratuitous pic of an Iditarod musher.  The "greatest race" starts on March 4, and is [hopefully] back to normal.  Can't wait.

BTW, grinchy comment, the Iditarod website is embarrassing.  It looks like someone's high school kid threw it together back in 1995.  C'mon.

Good news: Meryl Streep joins the cast of Only Murders in the Building for Season 3.  The real good news here is that there will be a Season 3, and it's imminent. 

Important info: all Marvel Movies ranked (and where to stream them).  I've never heard of many of them - and am not a Marvel comics person - but quite a few of these were excellent. 

I think I liked the ones which don't rely on pre-knowledge of the Marvel universe best, e.g. Black Panther.  YMMV.

Interesting, .NET now has a native compiler.  Only took 20 years.  When .NET was created the big rival was Java, and Java compiled to bytecode for a reason - the tagline was "write once, run anywhere".  Java never lived up to that, but .NET never had to, it only ever ran under Windows on Intel architecture machines.  I never could figure out any advantage to bytecoding. 

This is hilarious: Ukraine posts a video to reclassify American tanks as "recreational vehicles".  Love all the RV commercial tropes, the boy scout, the cowboy scenes, American nationalism.  Genius. 

BTW how remarkable that a government at war posts a Tweet lobbying another government to sell them arms.  The world has changed.

But some things have not: Henry Kissinger: Why I changed my mind about Ukraine.  "Before this war I was opposed to the membership of Ukraine in NATO because I feared it would start exactly the process we are seeing now, but the idea of a neutral Ukraine in these conditions is no longer meaningful."  He's 99 and still sees around the corner. 

Instapundit wonders, is that even allowed? - LA Times prints story that admits California storms not caused by climate change.  You mean we've had storms all along?  Who knew? 

Can we blame Al Gore?  "Speaking at the World Economic Forum's annual wankfest in Davos, Switzerland, the inventor of the internet and the scourge of massage therapists everywhere went on an unhinged rant that tells you all you need to know about the psychosis currently afflicting politicians all over the world."

Meanwhile: California lost 10M ballots in 2022.  Huh.  I'm a Californian and I vote by mail. 

The ARS Technica Rocket Report.  Includes SpaceX of course - that's a Falcon 9 launch at left ... - but many other companies are space-ing out too. 

I'm sure you've heard, Twitter are now enforcing their API rules to prohibit third-party clients which are competitors; Dave Winer reports it still works for other purposes.  I think the communication was awful but it was only a matter of time before this happened. 

Meanwhile: Mastodon has a lot of new clients; John Gruber shares the obstacle to more isn't Mastodon but Apple.  "Mastodon’s explosive growth in the face of Twitter’s collapse has made it a new UI playground, especially so on iOS."  

For me Mastodon is still a wanna-be.  When Ukraine starts posting propaganda videos there, then we'll consider them an alternative...

And some important news: the absolute best way to make sugar cookies.  A worthwhile investigation, and well reported.  I was not asked to help taste, but would be happy to volunteer next time. 

 

 

recent ancient history

Sunday,  01/22/23  09:26 PM

I restarted blogging after a two-year gap back in December, and so I had a two-year backlog of interesting stuff to share.  (Suitably filtered for after-the-fact relevance :)  I've done it in three tranches, recent history (pre Covid through mid-2020), ancient history (mid-2020 through mid-2021), and now this one, recent ancient history (some old stuff and then mid-2021 through end-2022).  I can't explain the naming either.

This was done while watching football; today for example I watched the delightful slugfest between the Cincinnati Bengals and Buffalo Bills held in the snow of Buffalo.  Playoff football as it should be.  Yes of course my space heater is on :)

Sept 2020: Belarus, once a startup magnet, faces a tech exodus.  This has only become more true in the interim.  I managed a team headquartered in Minsk; they were the greatest people as well as strong hardworking engineers.  I feel for them, not only because of what has happened there, but the impact of the Russian invasion of Ukraine.  Easy to forget that our problems are little ones. 

Sept 2020: Still a key issue: Tech's next big task: taking the office water cooler online.  Now that it has become apparent that hybrid work is the new normal, the lack of informal communication remains a drawback. 

Nov 2020: xkcd: Ten Years.  My favorite of all time, amid heavy competition.  I know you can't read it here; please click through and enjoy. 

Dec 2020: Tim Carmody: Long Overdue.  "It’s been a long time. So, first I'll ask: are you well? What’s changed for you since I last wrote?  And the last is the most unusual one, although maybe it should not be so unusual from now on: Have you lost anyone?"  Wow.  Please read and reflect. 

Aug 2021: news you should use: How to properly cut and serve different cheeses.  I'm available if you'd like to practice :) 


Sep 2021: Maximally interesting: the 100 most spoken languages.  Another must-click-through.  What do you think is #1?  Where do you think English ranks? 

Oct 2021: the Larkin Poe cover of Dazed and Confused.  I promise this will have you from the first note. 

Oct 2021: Chris Dixon links Hack Butcher: Composability is to software, as compounding interest is to finance.  I thought it was an interesting analogy so I saved it, but now it's interesting also because Chris Dixon joined VC firm A16Z and has been among the most vociferous supporters of crypto and "Web3".  I love reading his over-the-top defenses and withering rebukes to critics.  All in the schadenfreude file. 

Jan 2022: Molly White: "Blockchain-based systems are not what they say they are".  Correct.  Chris, meet Molly. 

Nov 2022: Tim Bray: AWS and Blockchain.  "I'm not prepared to say that no blockchain-based system will ever be useful for anything. But I’m gonna stay negative until I see one actually at work doing something useful, without number-go-up greedheads clamped on its teats."

Dec 2021: Maggie's Farm considers SAT tests, which links You Aren't Actually Mad at the SATs.  A proxy for the whole academic system of testing, grading, exams, etc.  It will be most interesting to see whether the pendulum swings back on all this.  Let's hope so. 

Jan 2022: Making the web better, with blocks!  I have mad respect for Joel Spolsky and saved this thinking he was on to something, but a year later, nada.  Software composability is as difficult to find as compound interest :) 

Feb 2022: Tim Bray: Google Memory loss.  It's a true thing.  And evidence increasingly seems to indicate it's a selective memory loss. 

Mar 2022: Dave Winer ponders Evolution in Software.  "The general rule is this, you can't go back in time and redo a decision. What's done is done."

When we discover intelligent life on another planet, they will have computers: CPUs, memory, etc.  They will not have Unix.

Mar 2022: I kept this because it's interesting, and especially for my friend Daniel Jacoby: Ernest Shackleton’s Ship Found After 106 Years.  The underwater footage is amazing! 

Apr 2022: Dave Smith: This exchange continues to haunt me.  Speaking for millions of people: "Thanks Dave"! 

Oct 2022: new Elon Musk essay (in China Cyberspace magazine): Believing in technology for a better future.  "As technology accelerates, it may one day surpass human understanding and control. Some are optimistic and some are pessimistic. But I believe that as long as we are not complacent and always maintain a sense of urgency, the future of humanity will be bright, driven by the power of technology."


Jul 2022: The first astonishing image from the James Webb Telescope.  "It is the deepest image of our universe that has ever been taken."  Wow.  Many more to follow! 

Dec 2022: NASA: 50 years ago: Apollo 17.  "Not long after midnight on Dec. 7, 1972, the last crewed mission to the Moon, Apollo 17, lifted off with three astronauts: Eugene Cernan, Harrison Schmitt, and Ronald Evans."  50 years ago.  Wow. 

Dec 2022: Boing Boing: Age-of-sail style map of the Mandelbrot Set.  Beautiful!  As is the set itself; it continues to be an infinite source of awe. 

And we're up to date!  Yay.  And Onward...

 

 

hello CUDA: moving from C to CU - GPU series #3

Monday,  01/23/23  09:00 PM

Okay, back to CUDA and GPU acceleration.  Today we're going to work through the example I gave in the previous introduction, but in more detail.  (Series starts here, next post here)

A reminder: in the CUDA world the CPU is called the "Host", and the GPU is called the "Device".

To recap, here's our vanilla C program, cleaned up a bit:

As you can see this is plain-old C code, nothing fancy.  At the bottom we have the main() function which says "hello", gets parameters, allocates three arrays named A, B, and C, initializes the arrays, does the math (computes C = A² + B for every array element), and then frees the arrays.

The getparms() function parses a single parameter, the size of the arrays.

If we save this as hello.cpp (yes, that's a link to the source, please download), then compile, link, and run it, we get something like this:

So yay.  But since we want to measure performance, let's make a slight change to give us timings.  Here is hello1.cpp:

The changes from hello0 are highlighted; there's a new header, timestamp.h, which declares a timestamp() function for displaying timestamps, and a few calls to timestamp() interspersed into the processing.  If we compile and run hello1, it looks like this:

The timestamp() function displays the cumulative time and time interval in seconds.  So the array allocation took 0s, the initialization .05s, and the computation 4.94s, for an overall time of about 5s.  That's for the default array size of 12,345,678.  Let's add another order of magnitude, by specifying 123,456,789 as the array size parameter:

So now the computation took about 50s.  That's a long time.  But the whole process ran on the CPU, and it ran serially under a single thread.  Let's make it faster!


Okay, so now we convert this into our first CUDA program, hello2.cu (note it is .cu now instead of .cpp):

As before the changes are highlighted.  At the top, we've added two more headers which contain CUDA runtime library declarations.

The array storage allocation was changed from malloc() to cudaMallocHost().  We'll talk about memory in more detail a bit later, but this function allocates storage which is addressable by both the Host and the Device.  Correspondingly, we changed free() into cudaFreeHost().

The mathalgo() function is now scoped with __device__ to indicate it should be implemented on the device, not the host (green highlighting).  And the domath() function is scoped with __global__ to indicate it should be implemented on the device, but callable from the host (also green).  Note that the other functions could be scoped with __host__ to indicate they run on the host, but this is the default so we didn't have to do this.

The domath() function call in main() was changed; we added two parameters enclosed by triple-angle brackets.  This is a CUDA extension to C which designates a call from the Host to the Device (from a __host__ scoped function to a __global__ scoped function).  This call actually spawns one or more threads to run in parallel on the device, as we shall see - an action referred to as invoking a kernel, where "kernel" describes the code and threads which will run on the Device.

And the domath() kernel invocation is followed by a call to cudaDeviceSynchronize().  This causes the Host to wait for the Device thread(s) to complete.

Okay, so that was cool, let's compile hello2.cu (with nvcc instead of gcc), and run it:

Well yay, it ran!  But ... boo, it took forever - nearly two minutes!  Using a GPU is supposed to be faster, what happened?

Well it turns out that we ran a single thread.  Here's a picture to show what hello2 does:

We invoked the kernel to run a single thread on the GPU.  Any one thread on the GPU is way slower than a thread on the CPU, not to mention the overhead of starting the process and waiting for it to finish.

So what to do?  We want to run a bunch of threads in parallel, like this:

So let's make a simple change to run multiple threads in parallel;  here's hello3.cu:

The highlighting shows the changes.  First, we've added a new parameter to specify the number of GPU threads.

In the main() function the kernel invocation of domath() has been changed: we've specified the number of threads as one of the parameters in the triple-angle brackets.  (We'll talk about the other parameter soon.)

And we've changed the loop in domath().  We don't want to iterate over the whole arrays in each thread, that would defeat the purpose.  We want each thread to process a different piece of the arrays.  The global values threadIdx and blockDim are available in __global__ and __device__ code, and give the current thread's index and the total number of threads.  Each thread therefore starts at a different place in the arrays, and iterates through using the number of threads (the "stride").

Let's say we have 100 threads.  The first thread starts at array element 0, and then processes 100, 200, 300, etc.  The second thread starts at 1, then does 101, 201, 301, etc.  And so on through thread 100, which starts at 99, then does 199, 299, 399, etc.  Each of the 100 threads processes 1/100th of the array, which is what we want.

Great, let's compile and run hello3 to see how this works:

Yippee.  Here we have a few different runs of hello3, with different numbers of threads, specified as the second parameter.

Remember that hello1 ran on the CPU in 5s?  And hello2 ran using the GPU single threaded and took 120s.  With 16 threads, hello3 took 11s, with 64 threads, 2.7s (already better than the CPU), with 256 threads, 1.2s, and with 1,024 threads, .8s!

Let's go back and try that test we did above, where we added another order of magnitude to hello1, and try it with hello3:

Excellent!  This took about 50s on the CPU, and just 7.7s with GPU acceleration.

However there is more we can do to go even faster - stay tuned for the next installment of this series...

 

 

Monday,  01/23/23  11:30 PM

I thought HoloLens was one of the coolest things Microsoft had ever developed and that healthcare would be a prime market for it, but somehow that never happened.  And now it has been abandoned! - Microsoft's recent layoff of 10,000 people (!) included the entire teams behind Virtual, Mixed Reality, and HoloLens.  It's not clear if it was never going to work or just hadn't worked yet; unfortunately some of these things take time.

Also sunsetting social VR pioneer AltspaceVR.

Meanwhile, Microsoft are investing $10B [further] in OpenAI.  So, yeah.

Jean-Louis Gasse: the ChatGPT Maelstrom.  A good survey of the current known issues, including inaccuracy, cost, and availability. 

Seth Godin: Overconfidence and AI.  On the difference between search (e.g. Google) and AI-powered chat (e.g. ChatGPT). 

Bertie Mesko: ChatGPT has been listed as an author on research papers at least four times... while science journal editors are trying to regulate it. 

Nvidia's AI Tech Designs Proteins Never Seen in Nature, Pointing Way to New Therapies.  Excellent. 

Powerline: The Real January 6.  "This is what one expects from a fascist state, not a constitutional democracy."

Hmmm ... Even NPR admits the pandemic 'stimulus' failed.  "Among the failed stimulus programs is the Paycheck Protection Program. This was one of the biggest pieces of the stimulus, totaling $835 billion taxpayer dollars invested, and it was the brainchild of both Republicans and Democrats."

Apple's iconic '1984' Super Bowl ad aired on this day 39 years ago.  Wow.  I so remember this.  "A garden of pure ideology..."

Another amazing anniversary: Mozilla.org's 25th birthday.  Many stories, many great links.  Who knew? 

Meanwhile: Radio signal detected from 9B light years away.  It's on the 21cm line of course ... one ping only! 

Huh, cool: Swytch Air expands your world for less than the cost of an e-bike.  It's a motorized hub, battery, pedal sensor, and control unit to retrofit your existing road bike.  An interesting concept... 

The latest Boston Dynamics robot tricks.  Wow.  Remember, this is not a movie.  Does it know about ChatGPT? :) 


But of course: This 32,000-mile Ocean Race has yachts doing research along the way.  "11th Hour Racing Team is one of five teams competing in the IMOCA class of this year's Ocean Race, a six-month dash across the world. The IMOCA-class yachts use foils and can reach more than 35 knots."  Bucket list! 

 

 

2101 a Telehealth Odyssey

Tuesday,  01/24/23  01:06 AM

Just another day in the life of S#a, she thought.  Coffee, shower, clothes (what to wear?), grab B1u, out the door into her pod, and blast to work.  Every day there were problems, but today there was a big new one.  Usually the hard part was data, finding the needles in the haystacks.  But this time the harder part was the people.

The coffee was kicking in.

“B1u, what’s happening?  Tell me everything.”

“Good morning S#a.” B1u said, using that fake Australian accent.  “Today is Monday, Feb 21, 2101.  It’s going to be bright and sunny and freezing cold.  I think that sweater was a wise choice.”  B1u always complimented her clothes but it was nice to hear anyway. 

“And the Lunacy continues!”  B1u used puns wherever possible.

“Yeah I didn’t think it would be a self-fixing problem”, she admitted.  “What’s the latest?”

“Another 200 people have been stricken.  No warning, no clues.  Just a bunch of healthy Luna colonists one day, and a bunch of sick people the next.”  B1u let its voice get deeper, trying to conjure up emotion.

S#a was actually relieved: nobody had expired yet.  But at this rate it was only a matter of time.

“Did we get new data?  Please scan my comms.  Look for LIDS and mark anything about it ‘urgent’.”  S#a knew B1u would have done so already, but by asking she allowed it to feel superior.

“Well of course I did that already.”  B1u tried to sound hurt, but it ended up sounding merely comical.  “I looked for anything with ‘Lunar’, ‘Immune’, ‘Deficiency’, or ‘Syndrome’.  There are 28 new matches, 5 have data...”, B1u paused for imaginary breath, “… of those, 3 are telehealth session blocks.  B3n was copied and has sent a comm.  Would you like to hear it?”

S#a suppressed an irritated “of course”, and said “yes please B1u” as sweetly as she could.  Take that, you aEye.

“B3n’s reply is just: ‘CI now!’”

S#a pondered for a few clicks.  B3n was right.  Their work with CI wasn’t ready for official certification but this was an emergency.  So now the challenge was not “figure out the data”, it was “figure out the people”.

“B1u?  I need your help.  I need to know everything about the Sacher Lab disasters.  Stat!”

***

S#a was a data scientist at the Global Institutes of Health.  Her day job was compiling and analyzing health data to assist policy makers.  But she moonlighted as a disease detective, working with her team to understand, diagnose, treat, and prevent diseases wherever they might surface.  It was a never ending task; no sooner was one infectious agent or internal syndrome identified and treated than another showed up.  Humans were complicated machines and they exhibited complicated symptoms, and they were increasingly living in complicated environments outside of their design parameters.

***

S#a’s pod popped into the landing zone of GIH headquarters with a soft ‘fwoop’.  “Yay, I made it”, thought S#a, and B1u chimed in with synthetic applause.  She checked herself in the reflection of the windows – argh, red hair sticking in all directions, as usual! – and walked into the lobby.

Striding down the hallway to the engineering area, she noticed more than a few stares.  Was it her hair?  B1u’s LEDs?  Or the fact that they knew, and wondered what she was going to do about it.  Knew she wanted to use CI.  Knew it could help, and could damage GIH’s reputation.  And knew C@l and the senior team would be skeptical.

Her workspace was a 10’ x 10’ cube with low walls that extended in transparent fields up to the ceiling.  A soft bluish light beamed down from above.  She sat down at the round table in the center, and plopped B1u in one of the spherical depressions on the table’s surface.  Red chair for a red day, she thought, for a red girl.  Onward.

“Okay I’ve found a bunch of data about Sacher”, B1u said.  “Much of it is public, but some of it is still restricted and I have to confess there might be more you can’t access at all.” 

Always my fault when you can’t do something, S#a thought, but access to sensitive data was only granted to humans.  “Great”, she said more brightly than she felt, “lay it on me”.

“Back in the good old 2050s, a lot of companies started to use AIs for concocting therapies as well as making diagnoses”, B1u began.  “A lot.  There was serious reimbursement which drove serious research and a lot of product development.  And a lot of pretty dramatic clinical results were reported.  A lot.”  B1u’s fake Australian was pretty cute, S#a reflected, making “A” sound like it had about five ‘y’s at the end, and giving “lot” an unwritten ‘w’.

“Good so far”, prompted S#a.

“Yeah, good so far”, said B1u, milking the suspense.  “But only so far.  As you know, AIs are tuned to optimize for outcomes.”  B1u paused a beat for suspense.  “In the beginning the tuning was only to get the best outcomes for patients.  But then people figured out they could also tune for other outcomes, like maximizing billing.  A small lab called Sacher developed some AI on its own, starting with a certified model but then modifying it with new data to optimize for new outcomes.  So far so good.”

“Yeah, so far”, said S#a with her best imitation of B1u’s imitation Australian.  “Lab developed AIs were a pretty common thing, and as long as they had appropriate validation protocols, were good science and good medicine.  So what happened?”

“Complexity happened”, said B1u.  “The AIs were good, but the algorithms were impossible to understand.  The results could be validated, but there was no way to check the optimizations.  Sacher decided to optimize for profit, and not patient outcomes.  Nobody could tell, but it made a difference.”

“Wheew”, fake-whistled S#a.

“Exactly.  It was subtle enough to fool the validators, but it affected results.  Sacher’s profitability increased, and they raised money to grow faster.  Did a few high-profile deals with pharmasites.  Attracted attention.”

Easy enough to happen, thought S#a.  She knew as well as anyone that AIs were only as good as their data.  Give the wrong optimization targets, and they could be corrupted.  It could be happening right now, too, except for the ban.  “Show me”, she asked.

B1u raised an opaque screen in the air over the table.  “Here ya go…”

A multiD spreadsheet appeared, which S#a scrolled and spun with brief gestures.  It took a few minutes, but she began to find the patterns.  She was used to data mining and pretty soon had the key bits isolated.  Sacher had deliberately retooled their diagnoses to overtreat patients, slowly at first, then faster.

“This is pretty low on the radar”, S#a marveled.  “How were they caught?”

B1u faked a laugh.  “The usual: someone blabbed.  Make a few people rich, make a lot of people jealous.  One of the engineers, a longtime veteran named J&n, spilled the beans.  The validators came back in, checked the results more carefully, and shut them down.  And once the lid was off the jar, a bunch of other examples were found at a lot of other places.  Lab-developed AIs were banned.”

S#a leaned back and pondered.  As usual, going too fast was not the fastest way.  The whole industry of AI-based medicine sat paralyzed by human review.  Humans were expensive and they didn’t scale.  But they could be trusted.

***

“Hey, checking out the LIDS data?” called a voice from the doorway.  B3n was tall and lanky, with long black hair that hung around his head like a curtain.  He grabbed the blue chair across from S#a, plopping his aEye into the table.  He glanced at B1u’s screen, and did a double take.  “Hey that’s not LIDS… What is that?”  Realization dawned.  “… ah the Sacher stuff.  Ah, yes.  Ah, yes.”

“We have to get past this ban!”  S#a sighed.  “We have to get C@l and the senior team to agree.  This is the perfect chance to test CI, it’s our best shot at LIDS.  We have to get our swing at bat.”

“Well we’ve got it.”  B3n gave S#a a significant look.  “We are invited to a senior team meeting this afternoon, to give a LIDS update.  P*y thinks we should bring up CI as an approach.  S/he says C@l hasn’t objected.  Maybe they think they can nip this thing in the bud.  Or maybe they think it could help.  Anyway we have just a few hours to get ready, and the fate of the whole Luna colony could be at stake.”

***

The senior team.  Dum dum dum, S#a thought.  Why didn’t she wear a suit?  Why didn’t she work out every day, or eat better?  Yeah, right.  And why didn’t her parents have her genetically modified at conception, she smirked to herself.  And strode into Conference Room Galen.

B3n was already there, leaning against one of the walls.  Ten pairs of eyes swiveled as she walked over to stand next to him.  “What’s happening?” she whispered.

“Nothing yet.  Did you run the tests?”

“Yep.”

B3n looked sidelong at S#a, and her sly smile told him all he needed to know.  “And…”

“Yep.”

C@l was one of those people you notice.  Tall, strong, and … severe, she had worked her way up on the business side, selling diagnostic tests to labs and later their companion drugs to clinics, and later still, managing teams selling everything to everyone.  Rumored to be interested in politics, her acceptance of an appointment to lead GIH was still a surprise.  Under her leadership it had become a force, working closely with FDA to decide which diagnostics and treatments were “safe and effective”, and which companies became successful as a result.  Having climbed so high, she was careful not to fall.  For GIH to be the gatekeeper its reputation must be preserved.

“Okay everyone let’s get started.”  C@l flipped up a big screen at one end of the hexagonal space delimited by translucent fields, and the lights at that end dimmed gracefully.  “Today we have a lot to cover, but let’s start with LIDS.  I think you all know B3n, and have probably met S#a from data science.  I’ve asked them to bring us up to speed.”

B3n and S#a had agreed that B3n would start with an overview.  He slouched to the front of the space, determined not to be nervous.  A few crisp panels with his narration gave the recent history; how LIDS had started with a few people complaining of flu-like symptoms, how the telehealth team had been unable to root-cause any bacterial or viral infection, how those affected were not getting better, and how the syndrome seemed to be contagious, affecting more and more of the Luna colony.

“Other than gathering data, what has been tried?”, P*y asked.  S/he was a hybrid, round of form and dressed in top fashion.  S/he had risen rapidly through the ranks of academia, renowned for skillful politics as well as strong science.

“Well of course telehealth is huge in all this”, B3n replied.  “Despite the three-second delay to Luna, we can effectively treat just about anything remotely.  As usual we’ve recorded all the sessions along with all the peripheral telemetry, and have been looking for correlations.”  B3n paused and scanned the room.  “So far nothing has turned up.  We’ve asked the local caretakers to isolate the patients and form a few groups, so we can try changes in diet and other possible treatments.  It’s too early for that to yield relevant data.”

As always P*y was the designated inquisitor.  “Okay, we’re following procedures, and we’re stumped.  So what else can we do?”  P*y paused and dramatically faced the room.  “What do you think we should do?”

“Um, well…”  B3n looked over at C@l.  “I think, that is we think, that is, S#a and I think, well, we think AI should be tried again.  For this case.  It’s a perfect test of our new CI theories, which …”

“Yes yes we all know about AI”, C@l interrupted.  “We all know.  And we all know what happened in the recent past, and why it was banned.  So what is this CI?  It would take more than just your hunch for me to let GIH out onto that limb.”  C@l swiveled to S#a.  “So … convince me.”

S#a walked to the screen.  Be cool, be calm, she told herself.  Stick to the facts, and let the data speak.  “Okay”, she said, “let’s review.”

“AI is able to take way more data than humans, find way better correlations in it, and make way better diagnoses than humans.  We know this.  The problem is not that AI is not good.  The problem is that AI is not transparent.  We don’t trust it, and so we banned it.  What if we had a way to trust it again?”

S#a stopped and surveyed the room, then gestured to bring up an animated panel that gave the history of diagnostic medicine over the 21st century.  The rise of genomics.  The dramatic advances in AI technology.  The adoption of telehealth, at first for emergencies, then for general medicine, and then as a way to capture data to support AI.  “We got to the point where most routine diagnoses were being made via telehealth, just to digitize the data.”  Her arms waved and the graphs danced.

“Like with anything, at first we didn’t trust it.  The AI was simply there to suggest diagnoses to the physician.  The people made the final decisions.  Kind of like how self-navigating pods started.”

S#a paused.  So far, so good, she thought.

“Of course we all know what happened next.”  A flick of her wrist brought up a new panel.  “Suggestions evolved into recommendations, which were strengthened by probabilities of outcomes.  And recommendations became the standard of care.”  S#a’s charts were simple and compelling.  (Thank you B1u, she secretly thought.  You’re the best chartmaker ever.)

“But the datasets which enabled these incredible improvements were getting bigger, the correlations were getting harder to understand, and the algorithms harder to verify.  There was a trade-off between effectiveness and transparency.  Humans could only validate results, they could not verify the algorithms.  And that led to trouble.”

S#a paused.  Here we go.

“Okay, let’s think about humans for a minute.”  S#a had practiced this part with B1u, and was it her imagination or was she slipping into that Australian accent?

“People have a wide variety of motivations.  Scientists do.  Physicians do.  Mostly they focus on their patients and the best outcomes.  But what if they don’t?  How do we keep that from happening?”

S#a had their attention for sure.  C@l was staring, curious.  P*y was gazing into the middle distance.  “It’s pretty simple.  We rely on other people.”

“We don’t trust any one person, but we trust groups.  The bigger the group, the more trustworthy.  This is how democracy works.  Economists study these groups and they have a name for this: Collective Intelligence, or CI.  Most of the work has been done on groups of humans, but it can apply to AIs too.”

“B3n and me and our team have been working on ways Collective Intelligence could enable AIs to watch each other.  We don’t have to trust any one AI.  Instead we trust a group of them, watching each other.”

Okay there it is, S#a thought, there’s the big idea.  But people don’t always believe a big idea.  I have to show them.

“I wanted to share some data from the Sacher situation.”  There was dead silence as one more wrist flick revealed the data she and B3n had been looking at that morning, nicely charted.  “As you can see, Sacher was doing good medicine.  They were diagnosing and treating patients effectively.  But that was not all they were doing.”

One more flick, one more panel.  “Here we see the profitability of treatment recommendations charted against the same data.  At a certain point, improvement in patient outcomes was sacrificed for profitability.  We all know this happened.  Now, what can be done?”

Before the room could react, another flick loaded another panel.

“Here are the Sacher data again, but this time, also charted with what other AIs have figured out from the results.  You can see the trust factor going down just as the patient outcomes have stopped improving and the profitability of the lab is picking up.  With CI we could have easily detected the Sacher situation.”

Wow, B3n thought, she did it.  It works.

“Here’s what we think.  What I think.  We should use AI to diagnose and treat LIDS.  We should use CI to monitor the AI.  We tell everyone what we’re doing and why.  And we let them check us as we check the CI as it checks the AI.  And we help a bunch of sick people get better.”

S#a stopped again.  What was everyone thinking now, that she was crazy?

P*y cleared hisr throat.  “Well I like it.  I like the thinking, and I like the data.  And I like doing something instead of thinking of reasons we can’t.”

There was a general rustling as the whole senior team took each other’s temperature.

C@l stood.  “Well I like it too!  Thanks B3n for your overview.  And thanks S#a for bringing this to us.  And thanks to your team for refining the idea and testing it.  Great work.”

“You know, it occurs to me, human progress is made this way.  One person has an idea, but they have to convince a group, and the group convinces others.  S#a has convinced us.  Now let’s be the ones to convince everyone else.”

She paused for effect.  “Maybe we can get AI back on track as the future of medicine.”

***

It has been quite a day, S#a thought, as she pushed open the door of the Space Bar.  It isn’t every day you can restart the future.  “So B1u, what’s happening?  Tell me everything.”

 

hello CUDA: a grid of blocks x threads - GPU series #4

Tuesday,  01/24/23  08:25 PM

Another day, another post about CUDA and GPU acceleration.  Now we're going to build on the detailed example from yesterday, in which we multi-threaded a simple example.  We'll extend this to run a parallel grid with multiple blocks of multiple threads.  (Series starts here, next post here)


Previously we saw that we could easily run many threads in parallel:

Up to 1,024!  But what if we want to run even more?

Turns out GPUs enable many blocks of threads to be run in parallel, like this:

Many (many!) blocks of threads can be invoked, and the GPU will run as many of them as possible in parallel.  (The exact number which will run depends on the GPU hardware.)

Let's see what this looks like in code, here is hello4.cu:

As before the changes are highlighted.  We've added a new parameter gpublocks to specify the number of blocks.  If this is given as zero, we compute the blocks as arraysize / gputhreads.

We've specified gpublocks as the first parameter in the triple-angle brackets, on the kernel invocation of domath().  Remember that the second parameter is the number of threads per block, so the total number of parallel threads is blocks x threads.

And we've changed the way the index and stride are computed inside the domath() function, so that the array is parcelled out to all the threads in all the blocks.  You'll note this makes use of several global variables provided by CUDA: threadIdx, blockDim, and now also blockIdx and gridDim.
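Since the listing itself is an image, here's a sketch of what the core of hello4.cu presumably looks like, based on the description above.  The names domath, arraysize, gputhreads, and gpublocks come from the post; the actual math, array types, and sizes are my assumptions:

```cuda
#include <cstdio>
#include <cstdlib>

// Kernel: each of (gridDim.x * blockDim.x) threads handles a strided
// slice of the array, so the work is parcelled out across all blocks
__global__ void domath(float *a, float *b, float *c, int n)
{
    int index  = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's start
    int stride = gridDim.x * blockDim.x;                 // total thread count
    for (int i = index; i < n; i += stride)
        c[i] = a[i] * b[i];                              // the "math" is assumed
}

int main(int argc, char *argv[])
{
    int arraysize  = 12345678;
    int gputhreads = 1024;
    int gpublocks  = (argc > 1) ? atoi(argv[1]) : 0;
    if (gpublocks == 0)                                  // 0 means compute it,
        gpublocks = (arraysize + gputhreads - 1) / gputhreads;  // rounded up

    float *a, *b, *c;                                    // pinned host memory,
    cudaMallocHost(&a, arraysize * sizeof(float));       // as in the earlier posts
    cudaMallocHost(&b, arraysize * sizeof(float));
    cudaMallocHost(&c, arraysize * sizeof(float));
    for (int i = 0; i < arraysize; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // blocks is the first parameter in the triple-angle brackets,
    // threads-per-block is the second
    domath<<<gpublocks, gputhreads>>>(a, b, c, arraysize);
    cudaDeviceSynchronize();                             // wait for the kernel

    printf("c[0] = %f\n", c[0]);
    cudaFreeHost(a); cudaFreeHost(b); cudaFreeHost(c);
    return 0;
}
```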

So what will happen now?  Let's try running hello4:

Wow.  With 10 blocks of 1,024 threads (10,240 threads overall), the runtime goes down to 2.3s.  And if we compute the maximum number of blocks (by specifying 0 as the parameter), we get 12,056 blocks of 1,024 threads, for a runtime of .4s!  That's GPU parallelism in action, right there.

Furthermore, when we specify an additional order of magnitude to make the array size 123,456,789, we run 120,563 blocks of 1,024 threads, and the total runtime of that is 3.7s.  Way way better than CPU only (hello1) which was 50s!

In fact, something interesting about this run: the array allocation took most of the time; the actual computation only required .16s.  Which is a good segue to the next discussion, about memory, and we'll tackle that in the next installment.

 

 

You changed my mind!

Wednesday,  01/25/23  08:27 AM

You changed my mind! - said no one, ever.

Yeah I get it.  I post, you decide.  And probably you will decide to keep thinking whatever you were thinking before.  At best you will also think, "wow, Ole is so right".  Or maybe, "wow, Ole is so wrong" :)

This is kind of like that other great meme: "Wait! - this is important.  Someone on the Internet is wrong!"

I'm posting this to let you know, I am not trying to change your mind.  I post stuff I think is interesting.  Some of it is stuff I think is wrong.  Some of it is stuff I think is right.  Some of it, frankly, I just don't know.  But I think it is interesting, and maybe you will too.

That brings up an interesting point - what makes something interesting?  (Worthy of your time...)  I claim the essence of "interesting" is "surprising".  New.  Different.  I'm a lot more interested in things I didn't know or thoughts I didn't agree with than in things I did know or did agree with.  This is why I subscribe to the New Yorker and the Economist.  I don't agree with everything they write, maybe not even with most of the things they write, and that makes them interesting!

So cheers.  Hope you enjoy the stuff I post even if you don't agree with it.  Maybe especially because you don't agree with it.  And who knows, maybe it will change your mind ... you might be the first one!

 

 

Comments?

Wednesday,  01/25/23  08:46 AM

Since the dawn of time this blog has never had comments.  At times I've been sorely tempted.  But then sanity prevails and the ease of doing nothing takes over.  It's not implementing comments that's hard - there are many many off-the-shelf solutions - it's the ongoing gardening to prevent spam and extreme views (and abuse).

And yet - it would be cool to have a dialog with you all.  I wonder if there's an easy way to use Twitter or Mastodon as the commenting system?  I'm already relaying links to blog posts there ... maybe if I had a link here that linked to replies there?

Please tell me what you think:

should I implement comments?

yes: 26%
no: 36%
yeah, use Twitter/Mastodon: 36%

Thanks!

[Update: yep, trying Mastodon...]

 

(ended 02/01/23)

 

Hello CUDA: memory models - GPU series #5

Wednesday,  01/25/23  11:09 PM

Hi all, another post in the series about CUDA and GPU acceleration. (Next post here)

So far we've considered a simple example of doing some math on large arrays, migrating from C++ to CUDA, multithreading, and implementing grids of blocks of threads.  We made this operation a lot faster using the GPU to parallelize the processing.  In all this, we've sort of ignored memory.  This time we're going to dive into the details.

As this diagram shows, the Host has Main Memory, accessed by the CPU cores, and the Device has Graphics Memory, accessed by the GPU SP cores.  As we've seen, CUDA makes it pretty easy to ignore this; we allocate storage with cudaMallocHost() instead of malloc(), and yay it is magically shared between the Host and Device.

There are actually three ways to share information between the Host and the Device.  One way is as we've seen; use "Host pinned" memory, via the CUDA API:

Host memory is allocated in Main Memory on the Host, and simultaneously in Graphics Memory on the Device.  The CUDA driver takes care of synchronizing the data between the Main and Graphics memory.  When a page is accessed, if it is "dirty" on the other side (has been modified), a copy of the page is transferred over the Bus.  The Bus is fast but not that fast, so this can slow down processing.  For our particular test we initialized the arrays on the Host side, processed them on the Device side, and then accessed the results back on the Host; not too much back and forth.

A drawback of using Host memory is that all the memory is allocated on both the Host and the Device, and on the host the memory is "pinned", that is, it cannot be swapped out to disk.  Depending on the workload of the Host this is undesirable, especially if the Device processing is long running.

More recent computers and NVidia GPUs support another mechanism which is nicer: "managed" (aka "unified") memory:

In this model memory is allocated on both the Host and the Device, but it is the same memory.  The address space is configured so both the Host and Device point to the same place.  This is super clean because no copying across the bus is required.  (Of course, memory accesses to the memory do take place over the Bus...)

Making use of managed memory is easy: simply call cudaMallocManaged() to allocate it, and then call cudaFree() to free it.  Operationally everything is the same as using Host memory, and it will run [at least a bit] faster.  Also, it will use less memory, and the memory won't be "pinned".  The main drawback is that this is a newer approach that is not supported on every computer with every GPU.  So in practice you might have to test whether Unified memory is available, and if not use Host memory instead.

There is a third approach to Host / Device memory which is older, harder, and more complicated.  So yay.  But it does have some advantages:

In this model, you explicitly allocate memory on the Host (with malloc()), and separately, explicitly allocate memory on the Device (with cudaMalloc()).  The memory allocated in this way is not at the same address and not cross-addressable.  Explicit data transfers using cudaMemcpy() must be performed to copy data from Host to Device, and vice-versa.  So there is more bookkeeping and more complexity.  But.

The advantages of using separate memory are speed - both the Host and Device are allocating local memory - and all transfers are explicit.  And a big advantage is the Main memory is only needed while the Host is using it, and the Graphics memory is only needed while the Device is using it.  So for long-running processes less memory is used overall, freeing more memory for use by other processes.

Let's take our example program from last time and modify it further to support Host or Managed memory, here's hello5.cu:

As before the changes are highlighted.  There's a new global enum gpumem to hold which kind of memory we want to use, and some logic to take this in as a fourth parameter: "h" for Host memory, and "m" for Managed memory.

The storage allocation function (getarray()) and deallocation function (freearray()) have been changed to test the gpumem enum and make different CUDA calls correspondingly: cudaMallocHost() and cudaFreeHost() for Host memory, and cudaMallocManaged() and cudaFree() for Managed memory.
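Since the hello5.cu listing is an image, here's a sketch of what those helpers presumably look like.  The names getarray(), freearray(), and gpumem come from the post; the enum values and everything else are assumptions:

```cuda
enum memmodel { MEM_HOST, MEM_MANAGED };   // hypothetical enum values
memmodel gpumem = MEM_HOST;                // set from the fourth parameter

// Allocate an array using whichever memory model was selected
float *getarray(size_t n)
{
    float *p = nullptr;
    if (gpumem == MEM_HOST)
        cudaMallocHost(&p, n * sizeof(float));    // pinned, mirrored host memory
    else
        cudaMallocManaged(&p, n * sizeof(float)); // unified host/device memory
    return p;
}

void freearray(float *p)
{
    if (gpumem == MEM_HOST)
        cudaFreeHost(p);
    else
        cudaFree(p);   // managed memory is freed with plain cudaFree()
}
```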

Now when we compile and run hello5, we can compare:

For this comparison, I increased the scale of the test to 567,891,234, which is 4.3GB per array.  (The test uses three arrays, so that's a total of 13GB.)  You can see the Host allocation took 1.3s longer - 6.2s vs 4.9s - and the processing was slightly longer too - 1.4s vs .9s.  And once again we see that the allocation takes longer than the processing.

From this we conclude that yes, Managed memory is better when you can use it.  (When the computer and GPU architecture implement Unified storage.)  It's especially better if the Host has other things to do because the memory is swappable.


So how about the Separate memory model?  Okay ... here's hello6.cu which implements that possibility:

Yep, lots of changes, as highlighted.  Using Separate memory is more complicated.  Let's go through all of them. 

First, at the top, in yellow, we've expanded the gpumem enum for Separate memory, and the corresponding parsing of the fourth parameter to support "s" for separate.  That's the easy part.

We've also expanded the logic of gethostarray() to use a simple malloc() when allocating on the host, and similarly expanded freehostarray() to use a simple free().  Note these functions have been renamed; before we called them getarray() and freearray() because the memory was on both Host and Device, now they are explicitly allocating memory on the Host only.

The green highlighting shows new code for handling the arrays on the device side.  New functions getdevarray() and freedevarray() allocate and deallocate memory on the Device.  And the new functions copyhosttodev() and copydevtohost() use the CUDA runtime function cudaMemcpy to copy data between the Host and Device.  There's also a small change in the main() function to pass the Device array pointers on the kernel invocation - dA, dB, and dC.  Note that with Separate memory the Host and Device memory will be at different locations in their respective address spaces.

The biggest changes are in the two new functions highlighted in turquoise - setupsourcearrays() and setupdestarrays().  They each implement a different logic for Separate memory than for Host or Managed memory.  This example is complicated because we are doing both; in any one program, you might decide to use one specific model and then you only have to have that logic.

Here's the high-level logic for both cases:

Host or Managed Memory
- Allocate source arrays on Host
- Initialize source arrays
- Allocate destination array on Host (which is also on Device)
- Invoke kernel
- Deallocate source arrays on Host (which are also on device)
- (process destination array)
- free destination array on Host

Separate Memory
- Allocate source arrays on Host
- Initialize source arrays
- Allocate source arrays on Device
- Copy source arrays to Device
- Free source arrays on Host
- Invoke kernel
- Allocate destination array on Host
- Copy destination array to Host
- Free all device arrays
- (process destination array)
- free destination array on Host
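The Separate Memory steps above can be sketched roughly as follows.  The CUDA calls (cudaMalloc, cudaMemcpy, cudaFree) are as described in the post; the variable names and the math are my assumptions:

```cuda
size_t bytes = arraysize * sizeof(float);

// Allocate and initialize source arrays on the Host
float *hA = (float *)malloc(bytes);
float *hB = (float *)malloc(bytes);
for (int i = 0; i < arraysize; i++) { hA[i] = 1.0f; hB[i] = 2.0f; }

// Allocate source and destination arrays on the Device, copy sources over
float *dA, *dB, *dC;
cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

// Host copies are no longer needed while the Device computes
free(hA); free(hB);

// Invoke the kernel, passing the Device pointers
domath<<<gpublocks, gputhreads>>>(dA, dB, dC, arraysize);
cudaDeviceSynchronize();

// Copy the result back to the Host and release all Device memory
float *hC = (float *)malloc(bytes);
cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
cudaFree(dA); cudaFree(dB); cudaFree(dC);

// ... process hC, then free it ...
free(hC);
```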

The parts which are common appear in both lists, so you can see the similarity.  The parts which are different are implemented in the functions highlighted in turquoise above.  Yep it's more involved, for sure.  But as a reminder, there are benefits: local memory on both Host and Device, and not tying up memory on both sides when unnecessary.  To see the practical effect of all this complexity, let's run hello6:

Okay!  The Host case - first run - is a bit worse than the Managed case - second run - but not by much.  We saw this above with hello5.  The Separate case - third run - is however quite a bit faster.  We have more steps because the arrays have to be copied back and forth, and you can see this copying took longer than the actual computation.  But the memory allocations were faster, the memory initialization was faster, and the actual computation was faster, resulting in an overall net savings of over a second.

There would be a further benefit if the Host was doing other things while the Device was doing the computation.  During the computation itself no Host memory is allocated, all the resources used are only on the Device.

The conclusion is - it depends.  If you want simple and want to run anywhere, use the Host memory model.  If you know you can use Unified memory, then use Managed memory model.  And if you don't mind the complexity and want top performance, use the Separate memory model.  As a final thought about this, you probably aren't coding CUDA directly unless you don't mind the complexity and want top performance - otherwise you'd probably just be using a library!

So that's it, right?  No there's more ... please stay tuned for the next installment when we talk about ... streams!

 

 

comments on comments

Thursday,  01/26/23  07:42 AM

Thanks for your comments on the comments? survey.  Please vote if you haven't already. 

I've dug into this a bit, and while it's possible to use Twitter as comments for a blog, it's not straightforward.  Wish it was, and wish Twitter cared enough to make this easy - it would be a great resource for news websites as well as blogs.  (Imagine if every Wikipedia page had one! :)

The perfect experience is: links to your blog posts appear on Twitter.  Each of these Tweets has a Reply button.  Each blog post has a Reply button, which does exactly what the Reply button on Twitter does.  Each of these Tweets has a link to display the Tweet and all its Replies.  Each blog post has a Comments button, which links back to display the Tweet and all of its Replies.

It appears that to build those parts I have to dive into the Twitter API, with OAuth and application IDs and all the rest of it.  So be it.  I have applied to be an authorized Twitter developer.  Stay tuned...

[Update: yep, trying Mastodon...]

 

Hello CUDA: streams - GPU series #6

Thursday,  01/26/23  08:15 PM

Okay, ready?  Here's another post in the series about CUDA and GPU acceleration.  (And yes, sadly, this is the last one, at least for the moment...)

We've tackled basic CUDA, multithreading, grids, and memory models.  Now let's talk about streams!


This diagram shows using a GPU with a grid of blocks and threads:

Back in the grids discussion we showed we could run 120,000 blocks of 1,000 threads, vastly accelerating our processing with massive parallelism.  So what can be done beyond that?

As you can see in this diagram, the whole grid is invoked as a single kernel with a single call.  The Device goes off and does whatever it is asked to do - organized as blocks of threads - and then completes.  But what if you want to run more than one kernel at the same time?  This isn't really a way to get more parallelism for the same task, it's more a way to do different things at the same time.

When a kernel is invoked, it is always as part of a stream.  There is an implicit "stream zero", and all the things we've done so far in our examples have used this stream.  If we want to run different kernels at the same time, we can define more streams, and put each kernel in a different stream.  The operations in each stream are serialized with respect to that stream, but asynchronous with respect to other streams.  (There are ways to synchronize to a stream, as we shall see.)

Here's what using a GPU w multiple streams looks like:

In the Host thread, we make multiple kernel invocations into different streams.  Each stream runs in parallel on the Device.  The Host can synchronize with all streams or with any one stream.  And as we'll see, it's also possible for a Device to signal the Host from within a stream.

Let's break up the processing of our example program still further.  Let's provide for the possibility to have multiple streams.  Here's hello7.cu which implements this (as before, that's a link to the source code if you want to try this yourself):

The changes are highlighted above.  First, at the top, highlighted in yellow, we've added yet another parameter to specify the number of streams, with a default of 1.

In the main() function, also highlighted in yellow, we've added a loop.  For each stream we call cudaStreamCreate() which returns a handle of type cudaStream_t.  These handles are stored in an array named cs.  On the kernel invocation, we've added a new fourth parameter to specify the stream.  We didn't do this before, and the default was "stream zero".  Specifying a stream means all the processing done for that kernel - the whole grid of blocks of threads - all happens within that stream.  Invoking another kernel into a different stream means they will run asynchronously relative to each other, sharing the Device resources.

As before, after all the kernels are invoked into all the streams, we call cudaDeviceSynchronize() to wait for them all to complete.  If we had wanted, there is also a function cudaStreamSynchronize() which synchronizes the Host to a given stream.  In this case we wanted to wait for all of them.  And then finally we have to call cudaStreamDestroy() to clean up the resources for each stream.

Okay so that's great, we're running multiple streams, but now we have to divide up the work between the streams, otherwise they'll all just do the same thing.  These changes are highlighted in green.

We've added two new parameters to the domath() global function which is the main line of the kernel, to pass the stream index, and the number of streams.  These values are then used in the calculation of index and stride inside the domath() loop.  With this logic each stream processes only its portion of the blocks.
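Putting the pieces described above together (the hello7.cu listing is an image), the stream handling presumably looks something like this sketch.  The cs array, the stream count, and the extra domath() parameters are as described; the index arithmetic and all other names are assumptions:

```cuda
// Kernel: each stream handles only its own portion of the blocks
__global__ void domath(float *a, float *b, float *c, int n,
                       int stream, int nstreams)
{
    // offset this stream's threads past the other streams' portions
    int index  = (blockIdx.x + stream * gridDim.x) * blockDim.x + threadIdx.x;
    int stride = gridDim.x * blockDim.x * nstreams;
    for (int i = index; i < n; i += stride)
        c[i] = a[i] * b[i];
}

// inside main(), after the arrays are set up:
cudaStream_t cs[MAXSTREAMS];            // handles stored in the cs array

for (int s = 0; s < gpustreams; s++) {
    cudaStreamCreate(&cs[s]);
    // fourth kernel parameter selects the stream
    // (the third is shared-memory size, left at 0)
    domath<<<gpublocks / gpustreams, gputhreads, 0, cs[s]>>>
          (dA, dB, dC, arraysize, s, gpustreams);
}

cudaDeviceSynchronize();                // wait for all streams to complete
// (cudaStreamSynchronize(cs[s]) would wait for just one)

for (int s = 0; s < gpustreams; s++)
    cudaStreamDestroy(cs[s]);           // clean up each stream's resources
```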

One other change, not directly related to streams, is highlighted in turquoise at the top of main().  We added a call to cudaGetDeviceProperties(), so we could display some interesting information about our particular GPU, like the kind of Device, amount of Graphics Memory, number of stream processors, and number of cores per processor.  This call can also be used to determine if the Device supports Host and Managed memory models.
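A minimal sketch of such a device query might look like this; the cudaDeviceProp fields shown are real, but exactly which ones the post's output displays is my guess:

```cuda
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);   // properties of device 0

printf("device: %s\n", prop.name);
printf("graphics memory: %zu MB\n", prop.totalGlobalMem / (1024 * 1024));
printf("stream processors: %d\n", prop.multiProcessorCount);
printf("max threads per processor: %d\n", prop.maxThreadsPerMultiProcessor);
printf("managed memory supported: %d\n", prop.managedMemory);
```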

Okay, well that wasn't too difficult.  Let's try running hello7, what will happen?

The first of these runs used one stream, so this is exactly like when we ran hello6 before.  The second run broke up the processing into 10 streams.  You can see, the number of blocks in each stream is one-tenth of the number of blocks in the first run, so we had 10 streams of 55,458 blocks of 1,024 threads.  This was faster, by 1.1s.

Once again we can see, the allocation and initialization of the arrays, and copying them back and forth to the Device takes far longer than the computation itself.  Marvel at the fact that in the second run, the entire computation on three 5GB arrays took 0.26s.  This is GPU acceleration in action!

Also note the information displayed about the device.  This particular GPU is an NVidia GeForce RTX 3080, it has 16GB of Graphics Memory, 58 stream processors, and 1,536 cores per processor.  No wonder it can process so many operations in parallel!


There's one other thing I wanted to share with you.  CUDA has the ability to call Host functions inline in a Device stream.  This is useful for a bunch of reasons, such as if you want to know when a kernel has completed processing of some data, or needs more input.

The following sample program hello8.cu includes a simple example of this:

I haven't included the entire source listing, because the changes are only at the bottom.  Highlighted in yellow, we've added a call to cudaLaunchHostFunc().  This call specifies a stream, and a Host function to be called, in this case streamdone().  You can also pass a parameter which is in turn passed to the Host function.  The streamdone() function simply writes a message, but of course you could do more, such as signaling a Host thread to take some action.
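Based on that description, the addition to hello8.cu presumably looks something like this sketch.  The streamdone() name comes from the post; its body and the parameter passed are assumptions:

```cuda
// Host function to be enqueued within a stream; CUDA passes back the
// void* we supply at enqueue time
void CUDART_CB streamdone(void *data)
{
    printf("stream %d done\n", (int)(size_t)data);
}

// in main(), when launching the kernels:
for (int s = 0; s < gpustreams; s++) {
    domath<<<gpublocks / gpustreams, gputhreads, 0, cs[s]>>>
          (dA, dB, dC, arraysize, s, gpustreams);
    // enqueued after the kernel in the same stream, so it runs on the
    // Host as soon as that stream's kernel completes
    cudaLaunchHostFunc(cs[s], streamdone, (void *)(size_t)s);
}
```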

Now when we run hello8, it looks like this:

As planned, when each stream completed its kernel invocation, the next step in the stream called the Host function, which wrote out a message.  In this case the streams completed in the order they were started, but that wouldn't necessarily be the case if the workloads were different.

So that's it!  At least for now :)  Hope you enjoyed this series, and please let me know if you have comments or questions, and especially corrections...  good luck parallelizing!

 

Thursday,  01/26/23  09:01 PM

This picture is called "straight lines".  Whoa.  Almost feel the pull of local gravity bending the light...

Slashdot: what happens when an AI generates designs for PC cases?  This!  Excellent... 

I wonder about the design constraints ... would these be buildable, and would they work (cooling, power, etc.)?

MarkTechPost: Top innovative AI powered healthcare companies.  Such an explosion of these, some are doing great work and creating value, others will be gone with nary a mark left.  Investor appetite for these is high. 

For example: MIT researchers develop an AI model that can detect future lung cancer risk.  Early detection translates into better outcomes for patients. 


This makes so much sense: Smart TVs as digital health devices.  All you need is a camera+microphone.  The pointing device on modern remotes is more than good enough for input. 

And AI in medical imaging market projected to reach $20.9B by 2030

I think I mentioned this already: Microsoft expands 'multibillion dollar' deal with OpenAI, creators of ChatGPT.  $10B.  They also just laid off 10,000 people.  That $10B might have paid the salaries of those 10,000 people for 10 years.  But this is clearly strategic, and some of the things Microsoft decided not to keep doing - like AR/VR - are not. 

Fox: California reels from string of mass shootings despite having some of the strictest gun laws in US.  So, we need even more laws, right? 

Jeremy Clarkson: We're in the midst of a coup. Who the hell’s behind it?  Huh. 


Brad Feld reviews Please report your bug here.  "I hope there are a lot more books like this. It balances startup stuff with the cynicism of the experience while placing it in a fictional world. It unexpectedly merges with believable near-term science fiction, which has a delicious parallel universe theme."  Sounds excellent, on the list! 


Dilbert gives relationship advice.  Hmmm... 

Interesting, from the Financial Times: We tried to run a social media site and it was awful.  They put up a Mastodon server and it was a lot of work.  And also, the legal, security, and reputational risks.  

Yet another reason, if any were needed, why I'd rather not host comments here.

Erik Sink continues his investigation into "native AOT" for .NET.  This time he considers storage allocation and objects.  Once again we see how much easier it is to have managed code and garbage collection :) 

Meanwhile, Tim Bray releases Quamina, an open-source pattern-matching library.  Glad to see he's keeping busy :) 

From the pages of history: Gorbachev and Louis Vuitton.  "Can there be any doubt who won the cold war?"

 

 

(NY 5/13/18 - Cheese and Me)

Friday,  01/27/23  10:13 PM

 

Of course

 

V;DW

Friday,  01/27/23  10:16 PM

The Internet has evolved a perfect shorthand for something which is a bit too long or detailed: TL;DR.  It means, Too Long, Didn't Read.  But it also means, sorry, I'm not taking the time to dig into this.

Interestingly, this has also come to stand for a summary; here's the TL;DR, meaning, here's the essence, the takeaway.  I love and use it.

And so now we have a related shorthand that I love: V;DW.  It means Video, Didn't Watch.  But it really means, no, I don't want to watch a video, just tell me.  And maybe it can mean, no, I don't want to click through to YouTube or some other baggage-y, link-filled, ad-infested website just so I can watch a dumb video.  (Too bad about YouTube, eh?)

And by analogy, it can stand for a summary, here's the V;DW, meaning, here's the essence so you don't have to watch the video.  Whew!

The Internet is wonderful and I love all the great explanations for anything you can think of - and many many things you would never think of at all.  But how crummy is it when the "answers" returned by Google are links to a bunch of videos?  And half of them are someone speaking the answer, when it could simply have been written down.  Anyway.

 

rainy, lazy, Sunday

Sunday,  01/29/23  01:19 PM

Hi all, it is a rainy, lazy Sunday around here ... perfect for a little coffee by the fire, a little coding, and a little blogging.  Also a bit later, a little football watching and a little cycling - Zwifting indoors of course!

Yesterday I made it back on the water - yay! - for the South Coast Corinthian Yacht Club's "Shallow Water Regatta".  My ace crew Jotham Schwartz and I raced my trusty C-15 "It's the Water", and won all four races.  Yippee.  It was sunny and we had a nice breeze, and were able to remember once more how great it is to live in Southern California where we can sail in January.  Without foul weather gear :)  Many wind shifts and many many roll tacks were involved...

Activating filter pass! ...

Yes!  Top Gun: Maverick’s Best Picture Nomination Is Silly - and Perfect.  For once a movie I watched and liked is nominated.  And it's not even politically correct. 


Powerline: The Daily Chart: Harvard's lack of diversity.  Not a surprise at this point, but still pretty amazing. 

American Thinker: Do You Trust the Integrity of Our Elections? If Not, Here Are the Steps We Must Take.  I do not.  OTOH I'm not comfortable with all of these suggestions either; I'm not sure about imposing country-wide standards on local elections, for example. 

Note: you can be an election skeptic without being an "election denier" (i.e., you don't have to believe Trump won in 2020), just like you can be a climate skeptic without being a "climate denier".  Maybe I'm just a skeptic!

American Greatness: The Environmentalist Assault on Civilization.  "The accomplishments of the environmental movement over the past 50 years are undeniable: cleaner air and water, protected wildernesses, and more efficient use of resources. The list is endless and illustrious. Environmentalist values are an integral part of any responsible public policy agenda. But the pendulum has swung too far."  Not only are the problems exaggerated but the solutions are impractical. 

Wow.  Tesla’s Full Self-Driving Beta pool balloons to ~400,000 users.  I'm one of them :)  Yeah we read about individual issues but at this scale the lack of general problems is impressive. 


Hoover: San Francisco falls into the Abyss.  Instapundit comments: Socialism: If you build it, they will leave.  If they can. 

The New Neo: I'm in California.  A close friend of Gerard Vanderleun (American Digest), who recently passed away.  A great blogger whom I linked often.  And now I shall follow the New Neo. 

The Nation: Heavy Is the Head: The British Royals in the age of streaming.  "I have never fully comprehended the monarchy’s popular appeal. Why would hundreds of thousands of people queue for hours and miles for a glimpse of the coffin of a leader they never elected, or get excited about the wedding of two wealthy people they are never going to meet".  Such a puzzle. 

I spend a lot of time in Montecito.  One of my friends asked, "have you seen Harry and Meghan?"  And I said "who?"


Everyone wants your email.  Heh. 

Note: this is my first link to a Mastodon post.  Won't be my last!

Jason Kottke: The Enshittification Lifecycle of Online Platforms.  "Here is how platforms die: First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die."  So how do you break the cycle?  Awareness is a good first step. 


Daring Fireball: The Billions-Dollar VR/AR Headset Question.  "The appeal and utility of all-day AR glasses is obvious. But we are obviously very far away from such devices being possible, at any price."  Hard to see these breakthroughs coming.  We talked about mobile devices but didn't see the iPhone.  We talked about AI/ML but didn't see ChatGPT.  Then suddenly there it is... 

Robert Scoble: Holodeck technology

ChatGPT passes US Medical Licensing exams.  Bet you didn't see that coming :) 


xkcd: Code Lifespan.  Heh! 

In 1990 (right after I was born :) we shipped a Series/1 simulator.  Written in C, it emulated the IBM Series/1 minicomputer, and enabled a large financial application written entirely in Assembly Language (!) to be migrated to newer computer architectures.  I am told it is still running...


Who knew?  M&Ms Are the Best Trolls on the Internet.  "After a long crusade by Fox News’ Tucker Carlson, the brand put its spokescandies on hiatus. It’s a savvy move that seems designed for social media."  OMG I love it. 


I'll leave you with a little Neil Young: Harvest moon.  [via Little Green Footballs

apropos: Lunar Time.  "Not only do you need leap seconds to keep solar time and atomic time in sync, you'll need a different kind of leap second to keep Lunar atomic time and Earth atomic time in sync, because mass distorts spacetime."  Nice.

 

 

link stats

Sunday,  01/29/23  01:29 PM

Here's a little inside baseball ... my friend Gary recently chided me that I link Powerline too often.  Hmmm... I wondered, really?  I like Powerline and they link interesting stuff, but I don't want to become too one-sided.  Not to mention, if all I did was link one site, that wouldn't be very interesting to you - you could simply link them directly.

[Update: Gary clarified, when I post political things I link Powerline too often.  And other "right" sources.  And I think he's right; as I look at this list, most of the political sites are "on the right".  (It could be argued that many of the "mass media" sites are "on the left", but they are not purely political.)  So a) sorry I misrepresented your comment Gary, and b) I'll try not to be such an echo chamber...]

So I did what anyone would do - wrote a script to analyze my links.  Here's the top 100 domains in 22,446 links from 3,443 posts:

domain total 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2020 2022 2023
w-uh.com 2972 320 191 155 49 28 461 456 336 276 1 37 197 201 93 25 32 27 87
feedproxy.google.com 1057 69 302 180 241 37 142 43 20 4 15 3 1
velonews.competitor.com 373 109 100 111 49 4
boingboing.net 360 57 60 30 15 2 10 28 18 10 9 24 42 23 2 19 5 6
wired.com 317 105 66 30 5 1 18 31 17 10 6 8 4 2 2 6 6
engadget.com 283 25 27 5 2 24 52 34 19 6 28 27 25 1 4 4
cnn.com 280 95 91 11 5 1 18 13 5 3 6 14 7 8 1 1 1
velonews.com 238 1 15 4 83 127 3 4 1
nytimes.com 226 56 36 9 3 30 22 16 12 3 9 15 5 2 6 2
kottke.org 216 7 2 2 40 40 30 15 12 25 18 6 3 10 2 4
amazon.com 215 49 33 16 3 24 24 10 3 15 8 21 4 4 1
feeds.feedburner.com 201 21 22 7 4 122 25
radio.weblogs.com 190 120 60 10
youtube.com 173 1 3 22 32 31 38 4 6 10 11 1 10 2 2
powerlineblog.com 167 1 12 8 5 47 39 18 16 1 2 2 10 3 3
pajamasmedia.com 157 60 64 21 12
scripting.com 150 18 6 1 7 2 21 25 21 16 5 15 4 3 2 1 3
facebook.com 149 1 3 6 9 23 41 58 5 1 2
inhabitat.com 149 23 19 22 11 20 21 14 14 2 3
daringfireball.net 147 6 11 4 34 27 28 9 1 11 4 2 4 3 3
google.com 141 19 4 3 9 38 4 13 6 25 13 6 1
techcrunch.com 140 1 21 17 15 9 3 22 31 16 1 4
blogs.law.harvard.edu 122 26 22 1 12 20 10 10 1 10 10
rss.slashdot.org 117 2 2 1 16 30 22 13 7 11 7 6
sports.espn.go.com 116 41 26 4 1 1 29 12 2
gnxp.com 115 58 26 13 1 16 1
economist.com 113 7 10 17 2 1 39 17 3 1 1 1 13 1
americandigest.org 112 9 11 5 1 23 18 12 9 6 9 2 2 3 2
apple.com 112 37 30 10 1 1 13 5 2 5 1 3 2 1 1
en.wikipedia.org 112 2 2 1 2 20 17 10 6 4 11 14 13 5 5
instapundit.com 103 20 33 12 3 1 24 6 4
rss.com.com 99 92 7
salon.com 99 36 10 1 10 5 13 4 2 8 3 1 1 1 4
news.com.com 97 47 32 13 3 2
littlegreenfootballs.com 95 13 22 3 3 2 24 13 6 4 2 2 1
aperio.com 94 11 25 13 5 9 13 3 15
newyorker.com 88 3 1 3 1 20 25 12 6 8 3 3 2 1
news.bbc.co.uk 86 47 16 8 1 7 6 1
tbray.org 86 29 16 4 1 5 6 6 4 1 4 3 1 2 1 3
foxnews.com 84 7 3 24 34 9 1 1 1 1 1 1 1
althouse.blogspot.com 83 8 2 28 25 9 6 1 1 1 2
feeds.powerlineblog.com 82 10 2 6 11 20 14 5 14
news.cnet.com 82 19 37 13 7 3 3
arstechnica.com 80 2 2 1 1 17 6 3 3 2 10 3 17 4 1 4 4
futurepundit.com 80 7 12 11 2 1 18 12 9 5 2 1
pjmedia.com 75 2 14 21 18 4 13 2 1
thetruthaboutcars.com 74 34 31 3 2 1 2 1
rss.cnn.com 72 21 15 11 5 3 6 6 4 1
eyesfinder.com 71 1 2 40 15 13
laughingsquid.com 71 1 1 13 25 16 6 7 2
twitter.com 71 2 4 6 1 2 1 23 10 22
zooborns.com 70 2 28 19 8 2 7 2 2
dilbert.com 69 1 1 1 4 11 23 13 5 8 1 1
businessweek.com 67 18 7 6 2 12 14 4 1 3
codinghorror.com 65 12 4 1 18 17 8 2 1 2
slate.com 65 1 3 11 14 1 3 6 14 4 5 2 1
cyclingnews.com 64 25 8 10 4 7 6 3 1
gizmodo.com 64 17 15 10 7 2 6 1 1 2 3
pvrblog.com 64 15 31 13 4 1
washingtonpost.com 64 29 8 4 5 1 3 3 7 4
feeds.arstechnica.com 63 2 8 15 12 8 2 7 9
archive.scripting.com 62 9 37 16
denbeste.nu 62 46 14 2
iditarod.com 62 7 11 13 30 1
esr.ibiblio.org 60 2 3 13 10 16 4 5 2 2 3
joelonsoftware.com 58 17 8 7 2 11 5 1 3 2 1 1
spacex.com 56 8 7 8 7 12 1 4 2 4 2 1
alwayson-network.com 55 12 42 1
blogs.wsj.com 51 3 24 6 6 1 8 2 1
collisiondetection.net 51 3 9 13 7 8 3 4 3 1
digg.com 50 3 2 6 1 3 10 16 6 1 2
slashdot.org 49 22 19 4 1 1 1 1
feeds.wired.com 48 8 12 13 12 3
cultofmac.com 47 10 14 7 2 1 3 6 1 1 1 1
parislemon.com 47 3 5 22 14 3
scriptingnews.userland.com 45 45
slate.msn.com 41 14 18 9
horsesmouth.typepad.com 40 4 13 10 6 3 1 1 2
popularmechanics.com 40 11 6 2 5 1 3 5 3 4
blognewsnetwork.com 39 12 27
nasa.tumblr.com 39 4 19 3 11 2
online.wsj.com 38 5 2 11 5 3 9 1 2
silflayhraka.com 38 14 16 7 1
doc.weblogs.com 37 11 21 4 1
latimes.com 36 7 3 2 8 4 1 2 1 3 5
russellbeattie.com 36 3 3 6 2 11 3 4 1 1 2
diveintomark.org 35 9 8 1 1 11 1 3 1
telegraph.co.uk 35 4 1 1 5 13 4 2 3 1 1
ventureblog.com 35 30 4 1
newscientist.com 34 13 7 2 1 1 3 3 4
pandasthumb.org 34 7 7 3 9 5 1 1 1
teslarati.com 34 2 4 9 2 11 3 3
caltriplecrown.com 33 7 26
gigaom.com 33 1 3 1 1 1 5 2 1 10 8
jrobb.mindplex.org 33 19 13 1
story.news.yahoo.com 33 20 12 1
command-post.org 32 7 24 1
rottentomatoes.com 32 1 3 2 4 3 8 7 3 1
xkcd.com 32 1 2 1 3 3 1 4 11 2 4
blog.aperio.com 31 7 4 8 5 1 6

Thoughts on this list:

  • Almost all of those sites still exist.  Whew.  But not all of them.
  • I have started linking to Twitter.  Did not do previously, could not follow.  I predict this will grow a lot.
  • Many links to YouTube.  I used to embed videos, but it was hard to do and the technique kept changing.  I just link now, although it's worse for you; the user experience of visiting YouTube is terrible.
  • Other than linking back here, the second most popular link destination is feedproxy.google.com; those are links to blogs that use Feedburner RSS feeds (acquired by Google in 2007).  I should make sure those links point straight back to the blogs, a project for another day.  Google, argh!
  • Would be interesting (and not that easy) to figure out how many of these links are still valid.  In the old days, you could do an HTTP GET and if you got a 404 status, okay, page is gone.  But many sites (including this one?) simply take you to the home page instead.  You get a page, but not the one you linked.  So there would have to be some logic to compare the linking text to the page to see if a redirection took place.

Some other interesting facts:

  • The post with the most links was "catching up" in 2004, with 228.  I'm sure you clicked all of them.
  • 690 posts have zero links!  Many were probably think pieces rather than reactions to an external link.
  • 509 posts have one link, and 282 have two links.

Cheers and onward!

 

Tuesday,  01/31/23  10:09 PM

A crummy day with bad things and bad news.  Rats.  I hope tomorrow is better!

We started watching Poker Face tonight.  Highly recommended, off to a great start.

Questions you might not have asked: What’s going on in the Dutch startup scene?  "Seen through the lens of his Royal Highness, Prince Constantijn." At CES ... 

Well good: Bill would even the playing field for telehealth reimbursement.  One good outcome of the Covid pandemic which will hopefully continue on past it... 


How Amazon's latest pharmacy move lays the groundwork for longer-term play.  "While the offering is not breaking new ground or particularly disruptive to the pharmacy market in the near term, industry experts say the move is worth watching for how it lays the groundwork for the e-commerce giant's long-term healthcare and pharmacy strategy."  So where do they go next?  Hard to see Amazon hiring physicians, right? 

Becker's: Amazon in the headlines: 7 recent healthcare moves.


Doctors, Get ready for your AI Assistants.  "Hospitals have begun using machine learning to help analyze and collect images, and the medical applications are endless."  Seems inevitable - and I think this positioning, of AI as "assistants", is exactly right. 


SpaceX President Gwynne Shotwell awarded 2023 National Space Trophy.  Yay, great to see her getting this recognition, you have to believe she is critical to SpaceX's success ... and cannot be that easy working for Elon Musk! 


‘Nasty’ Geometry Breaks a Decades-Old Tiling Conjecture.  "Mathematicians predicted that if they imposed enough restrictions on how a shape might tile space, they could force a periodic pattern to emerge. They were wrong."  Super cool. 


Requiem for a string: Charting the rise and fall of a theory of everything.  You know what, it just always seemed wrong to me - by W=UH - it was just too complicated to be right.  And now we find it is not even wrong. 

So - how do we tie electromagnetism, gravity, and the strong and weak nuclear forces together?  And explain all those particles?  Maybe that's the wrong question?

On the thermodynamics of complexity.  Whew. 


Brad Feld: AI and the HyperEnabulator.  "When I asked ChatGPT, 'Are you aware how little a sense of humor you have?' they said, 'No, I do not have self-awareness.' So I hope they figure out how to connect to the HyperEnabulator."

Jean-Louis Gassée: ChatGPT: Netscape Moment or Nothing Really Original.  I say, a Netscape moment.  Stay tuned. 


Berci Mesko: There were rumors that AI could start generating original proteins from scratch.  Now scientists have created an AI capable of generating artificial enzymes.  Wow. 

Clive Thompson: Why is my iPhone suddenly ending my sentences with a period?  A little think piece on the nature of messaging... 


Huh: After a failure 4 months ago, the New Shepard spacecraft remains in limbo.  "More than four months have passed since the launch of Blue Origin's New Shepard rocket ended in failure. No humans were onboard the vehicle because it was conducting a suborbital scientific research mission, but the failure has grounded the New Shepard fleet ever since."  Four months doesn't seem that long ... but yeah, space is hard. 



Did you know about this?  Nothing Phone (2) to be released this year in the US.  "We're developing a smartphone that's more premium than the Nothing Phone (1) and software will be a big focus area for us."  Well, okayyy. 


And finally, news I hope you can use: The World's Most Picturesque Restaurants, According To TripAdvisor Comments

I was actually at one of them exactly a year ago - Latitudes, in Key West, Florida.  Cannot disagree.

 

 
 

Return to the archive.

Comments?