We Just Found It on the Doorstep
Casey:
So there's a handful of people that I'll schedule like a monthly FaceTime call with.
Casey:
And most of them, you know, almost all of them are, in fact, all of them are not local.
Casey:
And then there's a handful of people that I try to do lunch with like once a month.
Casey:
And my good friend, Sam, he and I had our monthly lunch today and we went to a place, but I had a problem.
Casey:
During lunch there was music outside, which was good, but the Jack Brown's burger joint trolled me, because they were playing a Phish album during the entire lunch, and all I could do was think about how happy you would be if you were there, or if you at least knew this was happening as it was happening. In retrospect, I should have, like, FaceTimed you or something just to be like, listen to this junk.
Casey:
The worst part of all, the worst part of all, you could tell me what songs I heard and I would probably be like, sure.
Casey:
But there were a couple of songs that even I recognized as Phish songs.
Casey:
Like, you know, not only did it have the vibe of Phish, but I, like, had heard the songs before and recognized them.
Casey:
And I forget which ones they were.
Casey:
The only one I know by name is Bouncing Around the Room.
Casey:
And that was not it.
Casey:
Um, but there was one, and I'm sure this is describing half of Phish's catalog, where it was repeating the same phrase over and over again.
Casey:
And it was very catchy.
Casey:
That's fairly common.
Casey:
Yeah, exactly.
John:
Um, but anyways, David Bowie, maybe I'm kind of proud of you that you recognized that it was Phish.
John:
Cause I'm not sure I could do that.
John:
I don't know any of their songs.
John:
Maybe I could pick it up based on vibe, but I don't think I've even heard that much.
John:
So, like, you must... what, when are you listening to Phish so much that you recognize songs?
Marco:
I can give you a good heuristic, John.
Marco:
Um,
Marco:
If you hear a song that you don't recognize, you don't think you've ever heard it on the radio before, look around the room.
Marco:
And if the whitest guy in the room is slightly bopping his head to it... That's me, though.
Marco:
And zero other people are, there's a decent chance it's Phish.
John:
It could be anything.
John:
I don't know.
John:
I think my chances of spontaneously recognizing... you're at a restaurant and there's music playing in the background... spontaneously recognizing Phish?
John:
I think my odds are very low.
John:
I guess I'd have to look for somebody with the little red blood cell pattern on their clothing.
John:
And if they were bopping to it or something, then I could figure it out.
John:
Right.
John:
Now I know what that is.
John:
That's the one thing I can recognize.
John:
Marco taught me what that is.
John:
And now I see it on people's license plate surrounds.
John:
I'm like, oh, one of them.
Yeah.
Casey:
Anyway, the worst part, Marco, the worst part of this entire lunch, and about the only bad part of this lunch, because I really do enjoy Sam so very much.
Marco:
Wait, can I guess?
Marco:
Yes.
Marco:
Did you like some of it?
Casey:
It wasn't bad.
It really wasn't bad.
Casey:
So we have a new member special.
Casey:
We have gone back to the well and we have done another ATP tier list.
Casey:
John, can you remind us all, what is a tier list?
John:
I can't remind you all because everybody knows what a tier list is, except for old people who listen to this podcast, but then they've also heard the specials before.
John:
So it's a tier list.
John:
You rank things, you put them in tiers.
John:
Multiple things can be in a single tier.
John:
The top tier is S. Why?
John:
Nobody knows, except somebody knows, but we don't really care.
John:
The point is it's better than A. It's a tier list.
Marco:
And it's grading.
Casey:
It's like A through F and then S on top of A. And we graded all the iPods, or at least most of them anyhow.
Casey:
And so I am pretty confident that we did a pretty good job on this.
Casey:
There was a little bit of horse trading involved, but I'm pretty happy with where we ended up.
Casey:
We made a handful of people that we know very upset.
Casey:
And I'm sorry that you're upset, but we're right.
Casey:
So if you are curious to hear this tier list or any of the others, you can go to atp.fm slash join.
Casey:
And if you join even for but a month, but you should do more, then you can get to all of the member specials.
Casey:
We've been trying to do one a month for what, like a year or two now?
Casey:
I forget exactly how long it's been, but we've racked up a fair number over the course of the last several months.
Casey:
There's a handful of tier lists.
Casey:
We do ATP Eats, among other things.
Casey:
There's a lot of good stuff in there and some silly stuff.
Casey:
So ATP tier list.
Casey:
And if you are a member and you would like to watch the tier list happen, which is not required, but is occasionally helpful, there is a super secret YouTube link in the show notes for members where you can go and watch it on YouTube as well.
Casey:
Please do not share that.
Casey:
It's the honor system, but you can check it out there as well.
John:
It's in the show notes for the member special.
John:
Sorry.
Casey:
Yes.
Casey:
Thank you.
John:
When you go to the iPod tier list member special, look in the show notes.
John:
The first link will be the YouTube video.
John:
I like these tier lists because they always, we always seem to, I think they reveal something about the things that we are ranking.
John:
Something that we, at least I usually didn't know going in.
John:
You think, oh, you're just going to rank them and people are going to,
John:
you know, have controversies over which is good and which is bad.
John:
But I think in the end, when you look at the whole tier list and you kind of look at the shape of it and how it's worked out and how contentious the choices would be, you learn something about it.
John:
Like I think our connectors tier list was like that.
John:
And I think the iPod one turned out like that too.
John:
And the reason we made some people angry is because we know a lot of really weird tech people with very specific and often very strange opinions, specifically about iPods.
Casey:
I think you could also say incorrect opinions.
John:
They have their reasons.
John:
Most of them have reasons that make some sense.
John:
I think one of the things we learned, not to spoil too much, is that a lot of people have... All the things that we put into a tier list, people can have personal sentimental reasons for.
John:
We all certainly do.
John:
And listeners do as well.
John:
And I think iPods, more than anything we've done before, the people who had opinions, they swayed heavily into the sentimental.
John:
Right?
John:
It was, you know, it was like, this was my first iPod.
John:
I really love this thing, right?
John:
Much more so than the past tier list we've done.
John:
So I think, you know, maybe the iPod at that point was the most personal product Apple had ever made.
Marco:
Yeah, I mean, honestly, like, I had a lot of fun with this one because, like, even though, like, I hardly ever really used iPods because by the time I could really afford decent iPods, it was only very shortly before the iPhone really took over.
Marco:
So I only really had a couple of years with iPods, but those couple of years, I really liked the iPods and this was actually fun.
Marco:
And for coincidence sake, I happened to have bought a couple of iPod Nanos off of eBay a couple of years back just to kind of play around with.
Marco:
And I took them out the other night after we recorded this episode and charged them up.
Marco:
Well, the ones that will accept a charge, at least.
Marco:
Charged them up and got to play around with the old iPod Nano and
Marco:
I will just say I stand by everything I said on that episode.
Marco:
Everything.
Marco:
So feel free to listen and tell us how wrong we are.
Marco:
And you too, listener, can pay us $8 a month to yell at your podcast player just a little bit more.
Marco:
So we encourage you to do that.
Casey:
That's absolutely great marketing.
Casey:
Thank you, Marco.
Marco:
And by the way, our membership episodes are DRM free.
Marco:
And so if you happen to use an iPod to listen to your podcasts, we are fully compatible.
Marco:
So you can pay us eight bucks a month to listen to our member content on an iPod if you actually have one.
Marco:
And you can honestly buy one on eBay for only a few months worth of membership fee because they're pretty cheap these days.
Casey:
Indeed.
Casey:
And hey, what would you listen to on an iPod if not a podcast?
Casey:
Well, you could listen to music and you could listen to music on a U2 iPod.
Casey:
And so Brian Hamilton wrote in with regard to the red and black colored U2 iPod.
Casey:
We were wondering, I thought we were wondering on the episode, or certainly there was some mumblings about it on Mastodon afterwards, you know, how did they get to red and black for the color scheme of the U2 iPod?
Casey:
And Brian wrote in to remind John and us
Casey:
about How to Dismantle an Atomic Bomb, which was released November 22 of 2004.
Casey:
And the color scheme on the cover art for that album is red and black.
Casey:
Where were you on that one, John, Mr. U2?
John:
Yeah, I remember it once I was reminded of it.
John:
I mean, here's the thing.
John:
Like I said on the episode...
John:
It's not as if Red and Black became like the iconic colors of the band.
John:
This was one album that was released, you know, obviously at the same time as the iPod as part of a promotional thing, like the iPod, the U2 iPod.
John:
The first U2 iPod was released in like October and the album came out in November.
John:
So it's a tie-in, right?
John:
And then there were future U2 iPods and they were also Red and Black, but at that point U2 hadn't released a new album.
John:
So they're all just tied to this one album.
John:
But they have released a lot of albums and there were future albums and there were past albums and
John:
I can tell you that this one and this color scheme did not become heavily associated with the band.
John:
But that's the reason.
John:
That's why they went with Red and Black, because of the cover of the album.
Casey:
Are you saying that as an assumption?
Casey:
I'm genuinely asking.
Casey:
Are you saying that as an assumption?
John:
No.
John:
Once I was reminded, I'm like, oh, yeah, that's why they did it.
John:
I mean, it's not a great reason, but I'm pretty sure it's the reason.
Casey:
Fair enough.
Casey:
Max Velasco Knott writes in that there's also another feature, and I'm using air quotes here, on the U2 iPod.
Casey:
Max writes, the U2 iPods featured signatures of the band members on the backside.
Casey:
I was fine with the black-red color scheme, but couldn't stand seeing Bono and company on the back whenever I turned them over.
John:
Yeah, I'd forgotten about that as well.
John:
I mean, obviously it's a shiny back end that doesn't show up that much, but if you really just wanted a red and black iPod and didn't care about the band, the signatures on the back kind of messed it up a little.
Casey:
Indeed.
Casey:
Nikolai Bronvo Ernst writes to us with regard to the DMA and Apple's cut.
Casey:
Nikolai writes, I really enjoyed your last show, 593, Not a European Lawyer.
Casey:
I'm also not a European lawyer, but I am a citizen in the EU and wanted to provide a single European's point of view.
Casey:
The DMA has nothing to do with Apple's cut in the App Store or how much money Apple earns from selling their hardware.
Casey:
It only has to do with ensuring fair competition.
Casey:
Citizens' rights to freely choose services they want to use without vendor lock-ins on interoperability, portability, and your own data, which we here in the EU believe belongs to the user.
Casey:
That was a pretty good summary.
John:
A lot of people have written in to say this, but I think people will get hung up with the idea when we talk about, like...
John:
Apple's cut and how the EU is trying to control that.
John:
And they're like, the EU is not trying to tell Apple how much money it can make.
John:
It's just trying to do this other thing.
John:
But the reason it gets mixed up and the reason people send us these emails is because what Apple did to, you know, supposedly comply with the DMA while also trying to prevent competition is
John:
is an application of fees. So, okay, the EU says you have to allow for competition. Apple says, okay, sure, we'll allow competition, but all of our competitors have to pay us an amount that makes it so they can't compete with us, right? And the cut we're talking about is not Apple's cut from its own App Store (when you sell through the App Store, you pay Apple some cut); it's the cut Apple demands from
John:
the app stores and the people selling through app stores that are not Apple's own, that are selling through third-party app stores. Apple is using money, using fees, to make the competition less competitive, and that's what we're talking about. I know it can get confusing when we're talking about Apple collecting its money or Apple having its fees and stuff like that, so I think maybe that's the source of the confusion. And the other thing, by the way, is that plenty of countries, including the EU,
John:
do actually tell companies that they can't make a certain amount of money on a certain thing that they do. Someone wrote in to give us the example of credit cards, like Mastercard and Visa, the two big credit card networks. I think in the EU, the fees they charge stores to process their credit cards are essentially capped, and the EU has basically said, you know, Visa and Mastercard, you own the market, you can continue to do that, but you can't charge merchants any more than point-whatever percent.
John:
The EU has not done that to Apple.
John:
They haven't said to Apple, hey, Apple, you can't charge more than 10% in your own app store.
John:
They haven't said that at all.
John:
They haven't said anything about what Apple can charge in their app store.
John:
What they just want is more competition.
John:
And Apple is saying, OK, there can be other app stores, but they all have to give us an amount of money that makes it unattractive.
John:
And yeah, we'll see how that flies.
John:
Again, the EU has not yet ruled on the core technology fee and all the other things that they're investigating.
John:
So far, they've only ruled on the steering provisions about how Apple restricts the way apps in its own app store can link out to third-party payment methods.
John:
But we'll see how those other decisions come out in the coming months and years.
John:
I don't know how long this is going to take.
John:
But right now, it's not looking good for the core technology fee.
John:
Let's say that.
Yep.
Casey:
We asked, mostly tongue-in-cheek, for Brexit-style names for Apple leaving the EU.
Casey:
Jared Counts was the first we saw to suggest iLeave.
Casey:
Frederick Bjorman suggested Axit and provided a truly heinous but hilarious, I presume, AI-generated image for this.
Casey:
My personal favorite, though, was suggested several times.
Casey:
First we saw was from Oliver Thomas.
Casey:
iQuit.
John:
Yeah, that's pretty good.
John:
The I Leave and I Quit... We had many more suggestions of these.
John:
I thought these were the top three.
John:
The I Leave and I Quit are cute, but I kind of like Axit because it's as close to Brexit.
John:
And the Axe thing, like the picture has like an EU-themed Superman holding an Axe and an Apple.
John:
And yes, it does look AI-generated.
John:
It's interesting how...
John:
Due to the way the various AI models that we're familiar with have been trained, most people can now look at an image and identify it immediately as AI generated based on like the shading and the weirdness of hands and all sorts of other stuff.
John:
It is kind of strange how quickly that happened.
John:
But anyway.
John:
I kind of like Axit, but I don't think we get to pick this name.
John:
So, I mean, MacBook One didn't really catch on, and neither did MacBook Adorable.
Casey:
Oh, please, it sure did.
John:
Well, within our little circle of podcasts, yes, but I don't see the New York Times running with Axit or iQuit.
Marco:
Yeah, we don't really seem to have naming power in the greater ecosystem.
Casey:
If we try hard enough, we can make Fetch happen.
Casey:
All right.
Casey:
Someone anonymously wrote in with regard to CarPlay Audio.
Casey:
We were wondering how CarPlay Audio worked, especially with the new CarPlay, and whether or not it was more like AirPlay 2, where it sends a big buffer or whatnot.
Casey:
And so Anonymous writes, audio and wireless CarPlay is always over Wi-Fi.
Casey:
Buffered audio for CarPlay is basically AirPlay 2.
Casey:
Buffered audio is available without doing next-gen CarPlay.
Marco:
Yeah, this was news to me because I speculated that it seemed like all CarPlay audio was always going over Bluetooth.
Marco:
Wireless CarPlay, I think, actually creates like a little ad hoc Wi-Fi network between the car and the phone.
Marco:
And wired CarPlay sends all that stuff over the wire.
Marco:
And I kind of assumed wired CarPlay, it seemed like it does audio and video over the wire.
Marco:
Wireless, it seemed like it was doing Wi-Fi for the audio – or for the video signal, rather, but Bluetooth for the audio.
Marco:
And apparently, this person wrote in who I think would know such things, and they said, nope, it's always over Wi-Fi.
Marco:
So that, to me, first of all, is kind of good news in the sense that, like, you can have –
Marco:
improve responsiveness, you can have better reliability for the audio because it's already going over Wi-Fi, and so you can do all that with current CarPlay tech.
Marco:
You don't have to use the new CarPlay system.
Marco:
The kind of sad and frustrating part is...
Marco:
then why do wireless CarPlay implementations out there in the world so often have just massively long buffers that make it really laggy and annoying?
Marco:
That's frustrating.
Casey:
All right, Kirk Northrup points us to a New York Times article with regard to using AirPods Pro as hearing protection.
Casey:
This is kind of a lot to read, but I think it's worth it because this really distills down the summary, and we'll put a link in the show notes if you want to read it for yourself.
Casey:
Reading from the article...
Casey:
As you can see in the results, any claim that the AirPods Pro's adaptive transparency, or hear-through, mode limits sound to 85 decibels does not prove true in our testing.
Casey:
The earbuds did bring the 105 decibel sound down to 95 decibels, which is a big improvement over using no hearing protection at all.
Casey:
But that's adequate for only about 45 minutes of exposure under our simulated conditions.
Casey:
Keep in mind that noise guidelines are designed with the assumption of a person who has no other loud noise exposure throughout the day. If you were previously exposed to loud noise levels through your work or hobbies, you would likely want to be even more careful when attending a concert on the same day.
Casey:
The hear-through mode on the Bose QuietComfort Earbuds 2, which Bose calls Aware Mode, did a little better in our tests, limiting the sound to 91 decibels, a level of volume reduction that might be adequate for a two-hour concert.
Casey:
As we swapped the earbuds for earplugs and switched back and forth between the earbuds Hear Through and noise-canceling modes, we were surprised to hear how much more enjoyable the show was when we used the AirPods Pro earbuds as hearing protection.
Casey:
Using the AirPods Pro's adaptive transparency gave us, in essence, a quieter version of the unattenuated live sound.
Casey:
The guitars, drums, and vocals all sounded surprisingly clear, and our enjoyment of the sound wasn't lessened at all.
Casey:
However, as our measurements predicted, it was still too loud.
Casey:
After about 10 minutes of listening, our ears grew fatigued.
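For context on where figures like "about 45 minutes" and "a two-hour concert" come from: under the common NIOSH-style rule of 85 decibels for 8 hours with a 3-decibel exchange rate, the allowable exposure time halves for every 3 decibels above 85. A quick sketch of that arithmetic, assuming the article is using roughly this convention:

```python
# Allowable exposure time under a NIOSH-style rule: 8 hours at 85 dBA,
# halving for every 3 dB above that reference level.
def allowable_minutes(level_db, reference_db=85.0, reference_hours=8.0, exchange_rate_db=3.0):
    return reference_hours * 60 / 2 ** ((level_db - reference_db) / exchange_rate_db)

print(round(allowable_minutes(95)))  # ~48 minutes, in line with the article's "about 45 minutes"
print(round(allowable_minutes(91)))  # ~120 minutes, in line with the "two-hour concert" figure
```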
Marco:
Yeah, this is interesting.
Marco:
So what the Wirecutter did was run a... Basically, they have a test setup with an artificial ear, basically, that they can put these earbuds into and measure what gets sent through them.
Marco:
I question whether these results are universal because, again, as somebody who's now watched, I think, three concerts, four concerts maybe with AirPods Pro as my earplugs, I know what it feels like to have my ears blown out from a concert and how it feels during and afterwards.
Marco:
And when I use the AirPods Pro, it doesn't feel that way at all.
Marco:
It feels just like using earplugs, which is what I was doing before using the AirPods Pro.
Marco:
When the Apple Watch measures the sound pressure hitting my ears, like when it indicates when you're wearing the AirPods Pro what it's doing, it caps at 85 decibels when they're being used this way.
Marco:
And I have found for whatever it's worth, like I have like an SPL meter because of course I do.
Marco:
And I have found the Apple Watch's sensitivity to be pretty accurate, although obviously it wouldn't be using the watch's built in mic when you have AirPods Pro.
Marco:
And so maybe there's some other factors there.
John:
How does it... how would it possibly be measuring the sound on the inside of your ear?
John:
Is there a microphone that's facing the inside of your ear?
Marco:
I think there might be.
Marco:
Isn't that how they do some of the calibration stuff?
Marco:
So anyway, the point is my experience actually using them, it really does not feel like I'm hearing a 95 decibel concert for three hours.
Marco:
It feels like what it says of 85.
John:
Well, how loud was the concert outside of the ear?
John:
From your seat, did you look at the decibel meter?
John:
If I had nothing on, what would the level be?
Marco:
Yes.
Marco:
So I did a couple of times where I would take the AirPods out and put them away so they turn off and see how the watch measures the concert fully.
Marco:
And I don't remember exactly, but I remember it was somewhere in the high 90s, I think.
Marco:
So not quite as loud as there.
Marco:
So maybe the difference is that they were coming from 105 decibels, and they came down to 95, and I was coming, I think, from somewhere in the 90s down to 85.
Marco:
So maybe that's the cause.
Marco:
Or it could just be differences in fit.
Marco:
I don't know exactly how good is the seal with their artificial ear setup compared to my actual ear.
Marco:
I don't know.
Marco:
There's no good way to know that.
Marco:
So I think the conclusion to draw here is...
Marco:
First of all, what we kind of already knew, which is they provide some protection, suitable for occasional concert goers, not suitable if you're going to be working in a factory every single day.
Marco:
There's different degrees of protection that you might need.
Marco:
This is not everyday protection.
Marco:
But also, it probably varies a little bit between both fit and between what exactly you're actually listening to.
Marco:
How loud is your environment?
Marco:
Yeah.
Marco:
Maybe it can't bring down 105 decibels, but maybe it can bring down 95 decibels.
Marco:
So obviously there are other variables here.
Marco:
So I think the advice that I would give remains the same, which is if you have really serious hearing protection needs or very frequent hearing protection needs, get real hearing protection.
Marco:
If you are an occasional concert goer like me and you want basic hearing protection for occasional concerts, this is probably fine unless you are standing like directly next to the giant PA speaker.
Marco:
Maybe you might need a little bit more protection.
Marco:
But this seems fine to me.
Marco:
And every time I've used them, I feel great afterwards and my ears don't ring at all and there's no fatigue.
Marco:
So like it seems to be working.
Marco:
So maybe it just has a limit to how much it can work.
Casey:
Apple is apparently using Google Cloud Infrastructure to train and serve AI.
Casey:
This is from HPC Wire.
Casey:
Apple has two new homegrown AI models, including a 3 billion parameter model for on-device AI and a larger LLM for servers with resources to answer more queries.
Casey:
The ML models developed with TensorFlow were trained on Google's TPU.
Casey:
John, remind me what TPU stands for.
John:
Tensor processing unit, something like that.
John:
We talked about the actual hardware on a past show and how many...
John:
Billions of computations or whatever they do, and how many different operands are in each operation. But yeah, I think it's like a tensor processing unit or something.
John:
It's basically... so Google doesn't buy its GPUs from Nvidia; it makes its own silicon to do machine learning. It has for many, many years.
John:
It's not a new thing. They're called TPUs, and that's what they're currently using to train Gemini and stuff. And if you pay them,
John:
Just like you pay AWS or whatever, you pay Google Cloud, I believe they will rent you their TPUs and you can train your models on it.
John:
And that's what Apple did.
Casey:
Indeed.
Casey:
Apple's AXLearn AI framework, used to train the homegrown LLMs, creates Docker containers that are authenticated to run on GCP, or Google Cloud something.
Casey:
What is that?
Casey:
Google Cloud?
John:
Computing?
John:
I don't know.
John:
Computers?
John:
GCP is like AWS.
John:
It's Amazon Web Services, but Google.
Casey:
Anyway, to run on the GCP infrastructure, AXLearn supports the Bastion Orchestrator, which is supported only by Google Cloud.
Casey:
This is a quote from their GitHub documentation.
Casey:
While the Bastion currently only supports Google Cloud Platform – there you go, I should have kept reading, my bad – Google Cloud Platform Jobs, its design is cloud-agnostic, and in theory it can be extended to run on other cloud providers, Apple stated on its AXLearn infrastructure page on GitHub.
John:
Yeah, so this is... I mean, we didn't put this in the notes, but the rumors are that the deal between Apple and Google to use Gemini as part of iOS 18, as an option alongside ChatGPT, that deal is reportedly getting closer. But this is from the past, of like, hey, Apple's got these models, the one that's going to be running on people's phones, or the various ones that are running on their phones, which are smaller,
John:
And the big ones, they're going to be running on their private cloud compute.
John:
And these are Apple's own models, and they train them themselves.
John:
And how did they train them?
John:
They paid Google to use TPUs to train their models.
John:
And so I feel like this is interesting in that Google, you know, Apple's...
John:
unfriendly relationship, let's say, with Nvidia continues, all right? And their friendly relationship with Google continues. It's kind of a surprise that Google didn't do the deal, maybe. You know, the rumors are, I think we talked about this on a past show, that nobody's paying anybody for the OpenAI thing, whereas maybe Google wanted to be paid. So we'll see how this works out, but
John:
Yeah, there seems to be a cozy relationship between Apple and Google because apparently Apple either doesn't have yet or doesn't plan to have fleets of massively parallel machine learning silicon that they can train their models on.
John:
But Google does.
Marco:
We are brought to you this episode by Photon Camera, the ultimate manual shooting app for iPhone photography enthusiasts.
Marco:
Whether you're a seasoned pro or just getting started with photography, Photon Camera is designed to give you all the tools you need to elevate your iPhone camera experience to new heights.
Marco:
Photon Camera is a beautifully designed, easy to use manual camera app for iPhone, perfect for both beginners and professionals.
Marco:
We'll be right back.
Marco:
Both Photon Studio and Photon Enhance are included free with your Photon Camera subscription.
Marco:
And here, of course, is the best part.
Marco:
For our listeners, Photon Camera is offering an exclusive deal.
Marco:
You can get up to 50% off your first year by visiting photon.cam slash ATP.
Marco:
Go there, photon.cam slash ATP, to claim your discount and start exploring the power of manual photography on your iPhone today.
Marco:
Thank you so much to Photon Camera for sponsoring our show.
Casey:
John, I hear that you have asked Apple for help, and they have said, you know what you need?
Casey:
You need a Mac Studio.
Casey:
Because why would anyone need a Mac Pro?
John:
This went around, I think, a week or two ago.
John:
Apple's got a page.
John:
Apple.com slash Mac slash best hyphen Mac.
John:
And the title of the page is Help Me Choose.
John:
Answer a few questions to find the best Mac for you.
John:
And when this was going around, the first thing I did was launch this page.
John:
And I wanted to go through the little wizard and answer a bunch of questions to see if I could reach the win condition, which is having this tool recommend the Mac Pro.
Casey:
Is that the win condition?
Casey:
It is the win condition.
John:
Are you sure? And the answer was very clear, and I was mostly telling the truth, but occasionally I would, you know, exaggerate to make sure I'd go down the Mac Pro path. Uh, and I did not end up at a Mac Pro. It recommended a Mac Studio to me, and a bunch of other people tried it too. So a bunch of people tried to use this tool to get a Mac Pro; nobody could do it. And Julio Montier tried it and found out how to cheat to win the game. Uh, if you look at the source code,
John:
You can see that there's like a JSON file that defines the options for the endpoints.
John:
And that JSON does not contain the Mac Pro.
John:
It contains pretty much every other Mac that Apple sells.
John:
But there is no way to get to the Mac Pro because the Mac Pro is not one of the options.
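If you want to check this yourself, here's a rough sketch: fetch the Help Me Choose page John mentioned and see which model names show up anywhere in its source, including any embedded JSON that drives the quiz. This assumes the model names appear as plain strings in the page; the layout can change at any time, and a substring match like this is deliberately naive.

```python
# Naive check of which Mac model names appear anywhere in the source of
# Apple's "Help Me Choose" page (apple.com/mac/best-mac), including any
# embedded JSON. Assumes the `requests` package is installed.
import requests

html = requests.get("https://www.apple.com/mac/best-mac/", timeout=30).text

for model in ["MacBook Air", "MacBook Pro", "iMac", "Mac mini", "Mac Studio", "Mac Pro"]:
    print(f"{model}: {'mentioned' if model in html else 'not mentioned'}")
```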
John:
That's weird. Is it? No, this is Apple telling you that, no, literally nobody wants this computer and nobody should have it. We all agree on this show that the current Mac Pro is not a great computer, but it is a computer that exists. And on top of that, there is at least one very specific reason why someone might want to use it: if one of the questions had asked, hey, do you have a bunch of PCI Express cards that you need to use?
John:
If the answer to that is yes, it's literally the only computer Apple sells that you can do that on.
John:
And that is really the only thing to recommend.
Marco:
Do you think the people who made this quiz know what a PCI Express card is?
John:
I mean, it's Apple.
John:
Like they have questions and answers for every other computer.
John:
It just seems weird to me.
John:
Now, again, I can understand saying, well, this is not a great computer.
John:
And really, honestly, no one should really buy it.
John:
Like I agree with all of that.
John:
But when you make a help me choose tool on your website,
John:
You should have all of the things as endpoints.
John:
And yeah, make the Mac Pro pretty much impossible to get to unless you need it.
John:
But there is a reason someone might need it.
John:
If someone is going through this tool and saying, I don't know what I'm going to do.
John:
I've got all these audio cards that I need to use for my old Mac is dying.
John:
Is there some other computer that I can use?
John:
How would you determine?
John:
That, uh, that Apple still sells computers with card slots in them.
John:
Uh, everyone, um, last time I was saying, okay, well, the people who need the Mac Pro know it.
John:
And so they don't need to use this tool.
John:
That's not how these tools work.
John:
You could say the same thing about, well, the people who need an iMac know they want an all in one thing.
John:
So they don't need to use this tool.
John:
If you already know which computer you need, yes, you don't need this tool, but the tool exists to lead you to whichever product that Apple sells is best suited for you.
John:
And it's weird to leave just one out.
John:
And I would just love to know the thinking behind that process.
John:
Look, if Apple doesn't want to sell them, don't sell them, right?
John:
But they're selling them.
John:
You can buy them for a huge amount of money.
John:
And the tool can make it difficult or almost impossible to get there because when it says, how many PCI Express cards do you need to use?
John:
the default choice should be zero, or "I don't know what a PCI Express card is." Like, have a million options that regular people will click, and they will lead them off that path and say, you shouldn't buy this. But if the person says three, or any number other than zero, you have to lead them to the Mac Pro, because this is literally the only computer they sell with card slots.
Marco:
I mean, you're going to hate this, but so I did the whole quiz trying to get to the Mac Pro before you said it wasn't an option.
Marco:
And just putting in all the highest requirements.
Marco:
I do 3D editing and content creation and video editing and audio editing.
Marco:
I need all these tools.
Marco:
I need to connect a bunch of stuff to my Mac.
Marco:
And it recommended exactly what I'm using right now, the MacBook Pro 16-inch.
Yeah.
Marco:
I thought for sure I'd at least get a Mac Studio, but nope.
John:
Well, no, because the question it asks is, do you do all your work in a single location or do you need to be portable?
John:
Did you say, oh, I do all my work in a single location?
Marco:
I said like on the one desk option, the very top option where it's like I do everything at the same place on a desk.
Marco:
I thought for sure I'd at least get a Mac Studio.
John:
I think a lot of the endpoints recommend two computers.
John:
Like, I didn't just get the Mac Studio.
John:
I got recommended the Mac Studio on the MacBook Pro.
Marco:
Oh, I also got two computers, the MacBook Pro $4,000 configuration and the MacBook Pro $3,500 configuration.
John:
Yeah, I don't know how you didn't end up with desktop because there must have been some question that's differentiating portability.
John:
Obviously, if you mention you ever need to take it somewhere, they're not going to recommend a desktop.
John:
I don't know how great this tool is.
John:
Wizards in general are not great.
John:
I like their comparison ones like for the phones where it does like columns and you can list all the features and scroll and see how they are different from each other.
John:
This doesn't do that.
John:
But I do think it's very strange to not have a single one of your computers in there.
John:
Remember when they were selling the trash can for years and years and really nobody should be buying that, right?
Yes.
John:
But if you needed whatever GPUs it came with, for a while it still did have the most powerful GPUs you could buy in an Apple computer.
John:
And if you needed those GPUs and they had a tool that was asking you a bunch of questions, they should have had a question that said, you know, do you use Maya at Pixar and need this much GPU power?
John:
And then it would lead you to the trash can.
John:
I don't know.
John:
It's weird.
John:
Anyway, if someone at Apple knows why the Mac Pro is omitted from this tool, please tell us.
John:
I'm sure it's the obvious reason, which is like, no one should buy that.
John:
And we kind of agree, but you're selling it.
John:
So put it in the tool.
Marco:
I'm pretty sure it's very clear why it's omitted.
Marco:
Even the very first day this Mac Pro came out, nobody should be buying it.
Marco:
Like, let alone now.
John:
Yeah, I mean, like, it's not nobody.
John:
Like, it is the only computer with slots.
John:
Like, that's not a great reason for it to exist.
John:
And it's not a reason for you to pay twice as much as a Mac Studio.
John:
But like, especially since they don't support, I believe they don't support at all anymore.
John:
The, you know, PCI Express breakout boxes like they used to on the Intel things.
John:
It's literally your only choice if you have cards.
John:
And that's one of the reasons they should continue to make it and do continue to make it.
John:
And they just never ask about that.
Casey:
Yeah, it made me laugh quite a bit that nobody was coming up with the Mac Pro.
Casey:
I don't know.
Casey:
Maybe that's a feature, not a bug.
Casey:
I'm just saying.
Casey:
All right.
Casey:
For the main, main topic this week, for your main course, we have a plethora of different AI-related topics.
Casey:
And I'm going to try to take us on a journey.
Casey:
We'll probably fail, and that's okay.
Casey:
But basically, this next section is AI.
Casey:
Yeah.
Casey:
huh, that's a thing, isn't it?
Casey:
And so we start on the 17th of June, for what it's worth, with our friend John Voorhees at Mac Stories, which is them saying, hey, the article is entitled, How We're Trying to Protect Mac Stories from AI Bots and Web Crawlers, and How You Can Too.
Casey:
And it seems like both John and Federico are...
Casey:
getting very wrapped around the axle with regard to AI stuff.
Casey:
And I'm not saying, I don't mean to imply that they're wrong or that's bad, but they are getting ever more perturbed about what's going on with AI crawlers.
Casey:
And I mean, to a degree, I get it.
Casey:
So that was on the 17th of June.
Casey:
John says, here's how you can protect yourself from crawling.
Casey:
And then on the 21st of June, Business Insider writes and says, oh, huh, OpenAI and Anthropic seem to be ignoring robots.txt.
Casey:
And if you're not familiar,
Casey:
If you have a webpage or website, I guess I should say, where you control the entire domain, you can put a file called robots.txt at the root of the domain.
Casey:
So, you know, it would be marco.org slash robots.txt.
Casey:
And any self-respecting and ethically clear crawler will start crawling marco.org or whatever the case may be.
Casey:
By attempting to load robots.txt and seeing if there's anything there.
Casey:
And if so, there's a mechanism, a schema, if you will, by which the robots.txt will dictate who or really what crawlers should or should not be allowed to crawl that site.
John:
And it's by path.
John:
They can say everything in this directory, you shouldn't crawl.
John:
Everything here, you can crawl.
John:
So you can sort of subdivide your site to say which parts are accessible.
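To make those path-based rules concrete, here is a minimal sketch in Python: a made-up robots.txt with per-crawler, per-path rules, checked with the standard library's advisory parser. The specific crawler names, paths, and URLs are illustrative only, not taken from any real site.

```python
# A made-up robots.txt, checked with Python's standard-library parser.
# The parser is purely advisory: it tells a polite crawler what the site
# asks for, but nothing enforces it.
from urllib.robotparser import RobotFileParser

example_rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(example_rules.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/articles/"))         # False: blocked everywhere
print(parser.can_fetch("SomeSearchBot", "https://example.com/articles/"))  # True: allowed
print(parser.can_fetch("SomeSearchBot", "https://example.com/private/x"))  # False: path is disallowed
```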
Marco:
Yeah, and I have thoughts on that, but we'll come back to that.
Casey:
Yeah, I mean, whenever you're ready to interrupt, to be honest, feel free.
Marco:
Okay, let's talk about robots.txt.
Casey:
Well, just actually very quickly, I apologize.
Casey:
I gave you the green light, now I'm giving you the yellow light.
Casey:
Just very quickly... I was already in the intersection.
Casey:
It's important to note that robots.txt has never been enforced in any meaningful way.
Casey:
It's been kind of a friendly agreement amongst...
John:
pretty much the entire World Wide Web, but there's never been any real, um, wood behind the arrow, or whatever the turn of phrase is. What do we call it? Advisory? Yeah, like advisory locking. It is a scheme that people who agree to that scheme can use to collaborate and work together, but there is no actual mechanism stopping anyone from doing anything. It is literally just a text file that you can choose to read or not. Right. So with that said, Marco, carry on.
Marco:
Yeah, and so robots.txt is basically a courtesy.
Marco:
It is a website saying, please maybe follow these rules if you would.
Marco:
But it is not a legal contract.
Marco:
It is not a legal restriction.
Marco:
It is not technically enforced or enforceable, really.
Marco:
It is also not universally used and respected.
Marco:
And I can tell you, I operate crawlers of a sort, and I don't use robots.txt.
Marco:
So when Overcast crawls podcast feeds, I don't even check for robots.txt.
Marco:
I just crawl the URL as the users have entered them or as they have submitted them to iTunes slash Apple Podcasts.
Marco:
Yeah.
Marco:
And you can just click that next month, next month, next month button forever if you want to.
Marco:
And so a web crawler that like, you know, indexes a page and then follows every link on that page.
Marco:
If it's hitting like a web calendar, it can generate basically infinite links as it goes forward or backwards in time.
Marco:
So the main purpose of robots.txt was to kind of advise search engines.
Marco:
And it was specifically for search engines.
Marco:
It was to advise them areas of the site that crawlers should not crawl, mostly for technical reasons, occasionally for some kind of privacy or restriction reasons, but usually it was just technical, like, hey, don't get into an infinite loop.
Marco:
Which was largely unnecessary, because the web crawlers eventually kind of figured out how to limit things on certain sites, and they eventually made themselves more advanced, and that wasn't really necessary anymore, even for that case.
Marco:
What?
John:
I think the primary use case was keep this out of your search index.
John:
Like, I mean, any decent crawler is not going to get into an infinite loop, but "keep this out of your search index" was, you know... and it was respected by the popular search engines of the day and still is.
John:
I think Google still reads robots.txt and still respects it.
Marco:
But the thing is, the whole idea of, well, I don't want any bot to crawl this, it was so based on assumptions about search engines in particular, web search engines.
Marco:
The current drama around trying to apply it to AI training, I think it's missing a lot of that context, that when this kind of unofficial standard was developed...
Marco:
it was all about web search engines.
Marco:
And when you think about how the web search engine dynamic has always worked with web publishers, there was never really any official contract between anybody that said, like, hey, Google, Bing, all the other search engines that have come and gone over the years...
Marco:
crawl my page, go ahead, index it, go ahead, even though technically that is making a copy in your server's memory and might be some kind of copyright violation, doesn't really matter because the purpose of this is going to help me.
Marco:
It's going to make people able to find my page through your search engine and will direct people to my page and I will be able to have them there, make money, maybe have them subscribe to my site in their browser or whatever.
Marco:
So there was that implied symbiotic trade-off that, okay, I actually, as a site owner, I want search engines to mostly to index my site because I want people to be directed to my site from the search engine.
Marco:
And so robots.txt was entirely in that context.
Marco:
It was never anything that was some kind of like legal contract that said, you must obey my rules.
Marco:
That really has never been tested until fairly recently.
Marco:
Like that...
Marco:
That was never really something that really ever came up.
Marco:
I mean, there have been a couple of things here and there with, like, Google News and news publishers in certain countries and stuff.
Marco:
But, like, for the most part, the basic idea of robots.txt was really just, please.
Marco:
Like, that's it.
Marco:
It was like...
Marco:
please do this or don't do this.
Marco:
And even then, like, it was often used in ways that harmed the actual customers using things or did things that were unexpected.
Marco:
This is why I don't use it for Overcast's feed crawlers because if you publish an RSS feed and submit it to Apple Podcasts,
Marco:
I'm pretty sure you intend for that to be a public feed.
Marco:
And so I feel like it is not really my place to then put up an alert to my user to say, hey, this person's robots.txt file actually says disallow star on this one path that this feed is in, and so I actually can't do this for you.
Marco:
That would feel like...
Marco:
I would have, first of all, no incentive to do that.
Marco:
And second of all, because of its intention and context as a standard for search engine, which I'm not, this doesn't really apply to me and my use.
Marco:
And there were all sorts of things over the years, too, like, you know, you could specify certain user agents, like, all right, Googlebot, do this.
Marco:
Yahoobot, do this.
Marco:
Like,
Marco:
And that was also problematic over the years, too, because it disadvantaged certain companies if you just had, like, bad behavior once.
Marco:
Or if a site owner just had, like, one bad thought about one of these companies once and then, like, never revisited it or whatever, then that company was allegedly, like, disallowed from crawling the site.
Marco:
Why?
John:
Well, I mean, it's not even that.
John:
It's like, you know, for people that know the technology behind it –
John:
Don't allow Googlebot.
John:
The way you identify Googlebot is by the user agent string, which is part of the HTTP request, and anybody can write anything there.
John:
And so all someone had to do was say, I'm Googlebot, and then just write a script that slams a site.
John:
And people are like, oh my god, my site's being slammed by Googlebot.
John:
No, it's not.
John:
It's being slammed by a thing that put that string into the user agent header.
John:
Like, it's just, there's no security, no authentication, and it's like email.
John:
And people forget this about email all the time.
John:
It's a real email from Santa at North Pole.
John:
Anybody can write anything.
John:
I know in email there are technologies to try to make this better, but with HTTP headers, the user agent string,
John:
There's no security behind that.
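To make that concrete, here's a minimal sketch with Python's requests library: the User-Agent header is just a string the client chooses to send, so anything can claim to be Googlebot. The URL is a placeholder.

```python
# The User-Agent header is self-reported; any client can claim to be any bot.
# A server that allowlists or blocklists "Googlebot" by this string alone
# is trusting whatever the client chose to write.
import requests

resp = requests.get(
    "https://example.com/some-page",  # placeholder URL
    headers={"User-Agent": "Googlebot/2.1 (+http://www.google.com/bot.html)"},
)
print(resp.status_code)
# Google documents verifying the real Googlebot via reverse DNS of the
# requesting IP; the header by itself proves nothing.
```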
John:
So if you're making any decision based on the user agent, whether it's a decision to allow something with a particular user agent string or disallow something, or you try to make decisions about, oh, I'm getting all these hits, and I look at the user agent string, and the user agent string is X, therefore it must be Google, therefore Google is bad, you have no idea.
John:
who or what that is, especially with, like, proxies and things bouncing around the web or whatever. So just like robots.txt, it's all just sort of a politeness agreement and convention that only works when all parties involved are being honest and acting in good faith, and that is not something that is true, broadly speaking, on the web. Right. And when you're looking at, like, you know, legalities or copyright issues...
Marco:
As I was saying earlier, none of this has really ever been tested because the way it was being used, the deal between search engines and publishers was mutually beneficial.
Marco:
And publishers, for the most part, who were not bad business people, for the most part, publishers really wanted for search engines to index their public content.
Marco:
And their private content shouldn't be accessible to the crawlers.
Marco:
It shouldn't be exposed to the public internet if they want it to be private.
Marco:
And so using robots.txt to try to say, I want you to only use the content on my site for this purpose, but not that purpose.
Marco:
But I'm going to keep serving it publicly and making it available to any bot that comes around publicly.
Marco:
You just have to maybe be polite about it.
Marco:
I feel like this is the wrong tool for that job.
Marco:
That job is more of a legal question.
Marco:
Like right now, again, like we haven't really had much of an agreement between publishers and search engines and other, you know, big aggregators before.
John:
I think there have been legal cases about it, especially in the early days, because in the early days of the search engine, the idea that you could go to a website that's not yours and type in a search string and see text that came from your website on someone else's website, on the Google search results page... I believe there were legal cases about that.
John:
And I think the result was that Google is allowed to run a search engine.
John:
And some of that can be considered fair use.
John:
Especially the old style search engines where what you'd see is a series of links that are search results and maybe a summary below them.
John:
Before Google started doing the thing where it's like, actually, I'm just going to give you an entirely unattributed snippet at the top of the page that tries to give you the answer you were looking for without sending you to any site.
John:
And of course, that snippet is now powered by their large language models.
John:
But before it wasn't.
John:
Um, that is still up for grabs, and we'll talk about that in a little bit. But the basic idea of a search engine that indexes the web and allows you to get links to the things that it has indexed, I believe, actually has been tested in court. And either way, whether or not it has been tested in court in your country or in the U.S. or whatever, practically speaking,
John:
I don't think there is as much disagreement about the utility of that. People like having traffic sent to them by Google. There are, you know, arguments about Google being too big and that there should be competition in the search space, but conceptually, a web search engine, I think we all agree, is a good thing that is necessary and should exist and helps everybody.
Marco:
Sure.
Marco:
But if you're going to start making qualifications of like, all right, well, here's how you have to use my content or not use my content.
Marco:
Robots.txt is not the way to do that.
Marco:
That is not any kind of legal binding.
Marco:
That is not any kind of technical restriction.
Marco:
I would even question whether it's even a good idea to even still have those files these days and to expect anything from them.
John:
Well, I mean, I think what people are expecting is, we'll read this thing from the Perplexity CEO in a second.
John:
But like, I think what people are expecting is for the ostensibly good faith actors to do what the existing ones do.
John:
Google honors robots.txt.
John:
And so do the other things.
John:
Apple honors it with their Applebot thing.
John:
So do the other things that crawl the web.
John:
Right.
John:
Right.
John:
Nobody has to follow it, but the good faith actors do.
John:
And so I think most of the pushback here is, hey, I thought you weren't just a random fly-by-night company or a bunch of script kiddies or whatever.
John:
I thought you were a big, important, serious company.
John:
And you have a crawler that crawls the web.
John:
And you should use a user agent that looks like you, right?
John:
And we won't ban your user agent when someone fakes it and spams our site with it.
John:
But we'll just say, here are the rules for you.
John:
You can't crawl these URLs.
John:
You can't crawl any of our URLs.
John:
You can crawl these or whatever.
John:
I think this is a reasonable tool for that job, provided you understand that the tool only works
John:
if the people on the other end agree and say, yes, we will honor your robots.txt. And I think part of the anger is that the AI companies are not behaving the way the search engine companies did.
John:
And that's the pushback.
Marco:
We are brought to you this episode by 1Password Extended Access Management.
Marco:
Imagine your company's security like the quad of a college campus.
Marco:
There are nice brick paths between the buildings.
Marco:
Those are the company owned devices, IT approved apps, and managed employee identities.
Marco:
And then there are the paths people actually use, the shortcuts through the grass.
Marco:
Those are unmanaged devices, shadow IT apps, and non-employee identities like contractors.
Marco:
Most security tools only work on those happy brick official paths.
Marco:
But a lot of security problems take place on the shortcuts.
Marco:
1Password Extended Access Management is the first security solution that brings all of these unmanaged devices, apps, and identities under your control.
Marco:
It ensures that every user credential is strong and protected, every device is known and healthy, and every app is visible.
Marco:
1Password Extended Access Management solves the problems traditional IAM and MDM can't touch.
Marco:
It's security for the way we really work today.
Marco:
It's available now to companies with Okta, and it's coming later this year to Google Workspace and Microsoft Entra.
Marco:
Check it out at 1Password.com slash XAM for Extended Access Management.
Marco:
So once again, 1Password.com slash XAM.
Marco:
Thank you so much to 1Password Extended Access Management for sponsoring our show.
Awesome.
Casey:
So we have a roundup from Michael Sai that we'll link in the show notes that talks about all this.
Casey:
And then on the same day, on the 21st of June, Perplexity CEO Aravind Srinivas responds to plagiarism and infringement accusations.
Casey:
So this is a post on Fast Company, and Aravind says, and this is a quote, we don't just rely on our own web crawlers.
Casey:
We rely on third-party web crawlers as well.
Casey:
So it's not my fault.
Casey:
They did it.
Casey:
Right?
Casey:
Over there.
Casey:
Well, who is over there?
Casey:
I don't know.
Casey:
It's the people over there.
Casey:
So reading from the Post, and this is a direct quote from the Post, but not from Aravind.
Casey:
Srinivas said, the mysterious web crawler that Wired identified was not owned by Perplexity, but by a third-party provider of web crawling and indexing services.
Casey:
Srinivas would not say the name of the third-party provider, citing a non-disclosure agreement. Asked if Perplexity immediately called the third-party crawler to tell them to stop crawling Wired content, Srinivas was non-committal. Quote, it's complicated, he said. What is this, Facebook? Is it 10, 20 years ago? Anyways, Srinivas also noted that the Robots Exclusion Protocol, in other words robots.txt, which was first proposed in 1994, is, quote, not a legal framework.
Casey:
He suggested that the emergence of AI requires a new kind of working relationship between content creators or publishers and sites like his.
John:
So this is actually something that a bunch of AI CEOs and other bigwigs have been doing is basically saying, oh, well, don't ask us.
John:
We outsource that.
John:
And it's like, come on.
John:
This is like CEO 101.
John:
Yeah, you outsource lots of things, right?
John:
But in the end...
John:
It's your company, whatever you're, you know, it's like, you know, if you say, oh, we outsourced it and they're doing something they shouldn't.
John:
It's like saying, you know, we outsource to some company that makes our bread for us at our sandwich shop and the bread's coming back with shards of glass in it.
John:
You don't say, well, don't blame us.
John:
We outsource it.
John:
Tell your bread maker not to put glass in the bread.
John:
Right.
John:
Or are you saying you explicitly said it's OK if you put a little glass in the bread?
John:
You have to take responsibility.
John:
Right.
John:
And so this is not just the perplexity.
John:
I've seen like three or four stories where AI CEO says, oh, that's not us.
John:
That's that's a subcontractor.
John:
So we don't have any control of that.
John:
It's like, wait, what?
John:
Like, just own it.
John:
Just say, we've decided we're not going to honor robots.txt.
John:
Because everyone knows you're not doing it, and you can't try to blame it on a third-party thing or whatever, and then defend that.
John:
And that's kind of where they go into it.
John:
It's not a legal framework, blah, blah, blah.
John:
And like I said, I think the pushback is not like, you know, they're legally required to do this or whatever.
John:
It's just that, like, we thought you were going to behave like a search engine, and you were going to be a...
John:
a polite member of web society.
John:
And it's clear that because you're like an AI startup, you're like, yeehaw, cowboy time.
John:
It's a wild west.
John:
You can't fence me in.
John:
We're not acting like Google because we don't have to.
John:
So tough luck.
John:
And so people are mad at the companies, right?
John:
There's no legal argument here.
John:
There's no like, it's just a decision that they're making.
John:
And by the way, Marco, on your decision not to do it, I would say the closest analog of Overcast is that you're a web browser.
John:
Web browsers don't honor robot.tech state.
John:
If you type a URL into the address bar of your web browser, you expect it to load that page.
John:
You don't expect it to load robots.txt and say, oh, this site says I'm not supposed to load this page.
John:
That's not what robots.txt is for.
John:
So if you are a web client used by an individual user, like a user loads an RSS feed in a podcast,
John:
That is a single person using a client application to browse the web, you know, to get an RSS feed, right?
John:
That is very different than an automated crawler that is crawling all over the entire web and following links, right?
John:
That's what robots.txt is for: robots.
John:
Unless you are literally a robot while using Overcast, which I don't think you are.
John:
Overcast should not look at robots.txt because it's not a robot and it's not being used by a robot.
John:
I have a whole podcast about this.
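To make that distinction concrete, here is a minimal sketch of the robots.txt convention being described, assuming a hypothetical site and bot name (the identifiers below are placeholders, not real crawler user agents, and this is not anyone's actual implementation). An automated crawler is expected to fetch /robots.txt and honor its User-agent/Disallow rules before requesting pages; a client acting on one user's explicit request, like a browser or a podcast app pulling a subscribed feed, just loads the URL it was given.

```python
# A rough sketch, not any real product's code. Suppose example.com serves a
# robots.txt like this (the bot name is a placeholder, not a vetted list):
#
#   User-agent: HypotheticalTrainingBot
#   Disallow: /
#
#   User-agent: *
#   Disallow: /drafts/
#
# A polite crawler identifies itself and checks those rules before fetching.
from urllib import robotparser

ROBOTS_URL = "https://example.com/robots.txt"            # placeholder site
PAGE_URL = "https://example.com/articles/some-post"      # placeholder page

rp = robotparser.RobotFileParser()
rp.set_url(ROBOTS_URL)
rp.read()  # download and parse the site's robots.txt

if rp.can_fetch("HypotheticalTrainingBot", PAGE_URL):
    print("crawler: allowed to fetch", PAGE_URL)
else:
    print("crawler: robots.txt asks us to skip", PAGE_URL)
```

Note that can_fetch only reports what the site asked for; nothing enforces it, which is exactly why the argument here is about politeness and norms rather than technical prevention.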
Marco:
Well, but... But if you look at what Perplexity is doing, it's, I think, a lot closer to a browser than a search index.
John:
Well, so Perplexity's business is complicated because it's a whole question of like...
John:
Do people go there to get links out to other places or do they go there to get the answer that you attempt to attribute?
John:
And I think people will get angry with perplexity when they provide an answer, but then don't say where this answer came from.
John:
And even if they do say where this answer came from, they're like, you provided too much content.
John:
This is the same problem people are beginning to have with Google is like, you're supposed to be sending me traffic.
John:
You're not supposed to be removing traffic, right?
John:
By...
John:
Either giving an answer that's synthesized from a website and not telling them the source, or basically inlining my entire web page, for example, and saying, you don't need to go to that website, here's the whole page right here. That is not what people expect out of a search engine. But Perplexity is so new and so young, and they haven't quite figured this out even at the crawling stage. So people are seeing their website crawled, and they're
John:
going to Perplexity's service and saying, oh, I can find my content there, and I put you in robots.txt, so you shouldn't be crawling this. And Perplexity is like, we don't have to look at robots.txt, because that's just an advisory thing, and we've chosen to ignore it.
John:
And so people are angry.
John:
That's what it boils down to.
Marco:
Well, and I think there's going to continue to be more and more applications over time of technologies like AI summarization and action models and things like that where some fancy bot basically is going to be browsing and operating a web page on behalf of a user.
Marco:
That is kind of like a browser, but it's a very different form that I think breaks all those assumptions with publishers.
Marco:
Like this is one thing that I faced when I was making Instapaper a thousand years ago.
Marco:
You know, Instapaper would save the text of a web page to read later and only the text, not like all the ads and the images and everything like that.
Marco:
I was very careful, though, to not make features that would enable somebody to get the text of a page without having first viewed the page in a browser or a browser-like context.
Marco:
So it would load the whole page.
Marco:
They would see the page.
Marco:
If there were ads, those ads would load on the page.
Marco:
They would see those ads.
Marco:
And then they could save what they were seeing, and then part of that would be saved to Instapaper and shown to them later.
Marco:
And that was always a very tense balance to try to maintain because what I didn't want was widespread scraping of people's text without loading their ads, but I figured that seemed like an okay tradeoff because that was literally just saving what was already sent to the browser and what the user was already looking at.
Marco:
But a lot of these new technologies – first of all, I probably wouldn't attempt that today.
Marco:
But a lot of these new technologies, I think, break a lot of those little details.
Marco:
Like if you have some kind of bot that's doing something on a website that's like – suppose it's one of these action models where you're saying, all right, book me a flight.
Marco:
Yeah.
Marco:
This stupid, like, book-me-a-trip thing that all of these AI demos from these big companies keep trying to do, even though nobody ever wants that. Suppose you have a book-me-a-trip kind of thing with an AI model, and the idea is that the model will go behind the scenes and, you know, operate Expedia or Orbitz for you, and manipulate things back there to find the best flights and hotels, whatever else.
Marco:
Well, those sites make some of their money via ads and affiliate things and sponsor placements on those pages.
Marco:
If you have some bot operating the site for you, kind of clicking links for you behind the scenes in some kind of AI context, that bot is not going to see those ads.
Marco:
It's not going to click those affiliate links.
Marco:
It's not going to pick the sponsor listing.
Marco:
It's going to just kind of get the raw data, and that's it.
Marco:
And that would violate those sites' business models if it happened.
Marco:
That really has not happened at massive scale until fairly recently.
Marco:
So this really has not been challenged.
Marco:
This really has not been legally tested that much.
Marco:
This really has not been worked out.
Marco:
Like, what are the standards?
Marco:
What are the laws?
Marco:
What are the legal precedents?
Marco:
How much of this is fair use versus not?
Marco:
For the most part, until very recently, we could pretty much just say, all right, if you serve something publicly via public URLs and anybody can just download it, then nothing bad would really happen to you and your business model for the most part if some bot came by sometimes and parsed that page for some other purpose.
Marco:
It wasn't a big deal.
Marco:
But now there's a pretty significant difference in scale and type of replacement.
Marco:
Now, with a lot of these AI products and with Google search itself, you know, increasing over time and then more recently rapidly increasing, what we're seeing now is full out replacement of the need for the user to ever look at that page.
Marco:
That's a pretty big difference.
Marco:
And it's really bad for web publishers and kind of, you know, then consequently really bad for the web in general.
Marco:
We have a pretty serious set of challenges on the web already.
Marco:
Even before this new wave of LLMs came by to further destroy the web, we already had a pretty bad situation for web publishers for lots of other reasons over the years.
Marco:
To have something that removes the need for many people to visit a page at all, that is going to crush publishers.
Marco:
And so it does make sense why everyone's freaking out about this.
Marco:
It makes a lot of sense.
Marco:
I do caution people, though, I don't think it's a very good business move or a very good technology move to say, I'm going to just block AI from being able to see any of my stuff.
Marco:
Because that's a pretty big hammer, and that's a pretty big blanket statement.
John:
And you can't actually block them anyway.
John:
Like, when it comes down to it, technically speaking, you literally can't stop them.
John:
Right.
John:
Unless you stop everyone from viewing your website, in which case you don't have a website.
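As a rough illustration of why there is no hard technical barrier here: robots.txt and user-agent blocking depend entirely on the client honestly identifying itself, and any HTTP client can send whatever User-Agent header it likes. The URL and header value below are placeholders; this is a sketch of the limitation, not a recommendation or any real crawler's code.

```python
# Sketch of why user-agent-based blocking is advisory at best: the User-Agent
# string is entirely client-controlled. (Illustrative values only.)
import urllib.request

req = urllib.request.Request(
    "https://example.com/articles/some-post",                 # placeholder URL
    headers={"User-Agent": "Mozilla/5.0 (generic browser string)"},
)
with urllib.request.urlopen(req) as resp:
    html = resp.read()  # the server sees what looks like an ordinary browser

print(len(html), "bytes fetched")
```

A server that still wants to serve its pages to ordinary readers sees the same kind of request either way, which is the point: short of a paywall or login, the only real leverage is social, contractual, or legal.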
Marco:
Right.
Marco:
So I think it is it is wise to focus on trying to prevent uses of your content that remove the need to visit your page, because that is a direct attack on your business model.
Marco:
That makes a lot of sense.
Marco:
I don't think it's wise to say I don't want any AI training or any AI visibility of my page.
Marco:
That, I think, is probably short-sighted and probably a bit too much of a blanket statement.
Marco:
I don't think it's good for any party involved to have that kind of blanket ban on it.
John:
What people want, though, the publishers in particular want is they want an ecosystem of members who do agree to some rules of politeness and say, look, we should agree on a system that lets me tell you that you shouldn't do X, Y, and Z on my site, and you should agree to it, and we'll feel better about you if you do that.
John:
Part of the reason I think Instapaper, your example, was not a particularly big problem is, like you said, scale.
John:
And anything with AI in the name these days, people flip out about it and think, this is going to be as big as Google.
John:
Instapaper was not as big as Google.
John:
It did not have billions and billions and billions of users.
John:
If it did, if Instapaper had Google scale, I bet there would have been a hell of a lot more scrutiny on even the very conservative things that you did.
John:
But because it was small, it's not a big deal.
John:
Like that's that's part of the sort of the ecosystem of the Web is there's all sorts of small things that don't have particularly big scale.
John:
They're doing all sorts of weird stuff.
John:
Nobody cares about them.
John:
We allow them to exist.
John:
It's fine.
John:
But now these big names in AI, AI is the next big thing.
John:
You're an AI company.
John:
You have a lot of funding.
John:
Everyone looks at them and thinks that could be the next Google.
John:
That could be the next thing with billions and billions of users.
John:
So we better take whatever weird stuff they're doing way more seriously than we would take Overcast.
John:
And even with Google, the current...
John:
giant in the world of search, and they're, you know, trying to replace sites and give answers on the side or whatever. Nilay Patel coined a term about this, I think it was his, called Google Zero, which is the point at which publisher websites get zero traffic from Google search. Right? Because it's been going down and down over the years, because, hey, you type a Google search and look:
John:
The answer to my question that I typed into Google, it's right on the Google results page.
John:
It's unattributed.
John:
And even if it was attributed, I don't have to click on any link to get to it, because the answer is right there.
John:
And so Google has been sending less and less traffic to websites.
John:
And Google zero is when you notice, hey, you know what?
John:
You know how much traffic we're getting from Google searches?
John:
Zero.
John:
I don't know if it's absolutely zero for everybody, but it's sure going down.
John:
And it's a scary world to have what was once the massively largest source of your traffic to your website disappear.
John:
But yeah, whether or not it is wise to ask to be excluded from, you know, whatever AI crawler thing, from whatever, OpenAI or Perplexity or whoever —
John:
I think most publishers simply want that choice, and to have that choice,
John:
the crawlers need to agree, because, again, there is no technical way to stop this short of putting your entire site behind a paywall.
John:
And even that's not going to stop them because they'll just pay and have their crawler go through it.
John:
Like it's the thing about publishing on the web.
John:
You do.
John:
It's like DRM.
John:
You want people to see your movie.
John:
You can't make it impossible to see your movie.
John:
You have to give the viewer an ability to see your movie.
John:
But once you give the viewer the ability to see your movie, they can see your movie.
John:
But what if they see it but also record it? I want them to see it, but not be able to see it. Can I do that? And the answer is no. Right? So if you're publishing on the web, it's like anything else. That's why Marco was right to call this a legal thing. Like, things
John:
are published all the time. They were published on paper, you know, like books or whatever. It's like, but I can take the book and look at it, I can see all the letters in it, haha, the book is mine. Well, no, actually, we have laws about the stuff that's in that book. We have this thing called copyright, and even though you can technically read it, and you can technically copy it, increasingly more easily over time with technology, we have laws surrounding it to control what you can do with it.
John:
And robots.txt, people who think of robots.txt as some kind of like technological bank vault, it's no more of a bank vault than you could put on a book.
John:
Like you do want people to read it and you can't stop them from being able to copy it.
John:
And these days it's really, really easy to copy a book, especially if it's an e-book, right?
John:
Setting aside the whole DRM thing.
John:
What you want is some either...
John:
in a sort of polite society, an agreement among the large parties that actually are significant to get along.
John:
And then failing that, you want laws to provide whatever protections you think are due to you.
John:
And yeah, the Google search stuff has, I feel like, been hashed out probably in the AltaVista days, but who knows.
John:
The AI stuff has not yet been hashed out.
John:
And so moving on to this next one, because we have a lot of these items, Microsoft, at least someone in Microsoft, has a very interesting notion of
John:
what the deal is on the web, and potentially what the law should be surrounding it.
Casey:
So this is a post on The Verge by Sean Hollister, who writes, Microsoft AI boss Mustafa Suleiman incorrectly believes that the moment you publish anything on the open web, it becomes, quote-unquote, freeware that anyone can freely copy and use.
Casey:
When CNBC's Andrew Ross Sorkin asked him whether AI companies have effectively stolen the world's IP, Mustafa said,
Casey:
I think that with respect to content that's already on the open web, the social contract of that content since the 90s has been that it is fair use.
Casey:
Anyone can copy it, recreate it, reproduce with – sorry, recreate with it, reproduce with it.
Casey:
That has been freeware, if you like, and that's been the understanding.
Casey:
Microsoft is currently the target of multiple lawsuits alleging that it and OpenAI are stealing copyrighted online stories to train generative AI models, so it may not surprise you to hear a Microsoft exec defend it as perfectly legal.
Casey:
I just didn't expect them to be so very publicly and obviously wrong.
Casey:
I'm not a lawyer, writes Sean, and that's also true for me.
Casey:
But I can tell you that the moment you create a work, it is automatically protected by copyright in the U.S.
Casey:
You don't even need to apply for it, and you certainly don't void your rights just by publishing it on the web.
Casey:
In fact, it's so difficult to waive your rights that lawyers have to come up with special web licenses to help.
Casey:
This is so gross.
Casey:
I'm not as riled up as a lot of people about these AI bots crawling my website.
Casey:
Sitting here now, I don't find it that off-putting.
Casey:
I don't love it, but whatever.
Casey:
This, though, this is disgusting.
John:
So this is such a weird statement because everybody knows how copyright works.
John:
I'm sure this person knows as well.
John:
But to say that like, oh, once you put it on the web, it's freeware, which is a term that mostly applies to software.
John:
But like the idea is you can recreate it, reproduce it, you know, copy it.
John:
Like, no, no, no.
John:
Like those are specifically the things we actually do have laws around.
John:
What we don't have laws around are the more complicated things like, well, can I train AI on it or whatever?
John:
And we'll get to that in a little bit.
John:
But, like, it's such a weird thing to say that, like, oh, as everyone knows, since the 90s, once you put it on the web, you forfeit all ownership.
John:
That's not true at all.
John:
And I think that's, like, it's one of the things that's great about the web is, oh, it's just like books.
John:
It's printed word, right?
John:
And especially in the beginning, it was just a bunch of words.
John:
And we already have laws surrounding that, right?
John:
And that's why there were cases about searching.
John:
It's like...
John:
Are search engines copying it?
John:
Because, you know, we got this whole, you know, giant library of laws about copying text.
John:
My website has text on it, and Google's copying it.
John:
And they had to duke it out and say, actually, what Google's doing is, you know, fine within these parameters, blah, blah, blah, right?
John:
But that fight was fought because it was an example of copying.
John:
But, yeah, this...
John:
I mean, obviously this guy, the Microsoft AI leadership, is not a lawyer either. But that's not how you should defend this. You shouldn't defend it by saying, you know, everything on the web is a free-for-all, because that's never the way it's been, and it's not the way it is now. This is yet another foot-in-mouth problem from Microsoft. I'm not sure what's going on over there, but they really need to
John:
take a lesson from Apple and maybe try to speak with one voice instead of having individual lieutenants make really terrible statements to the press.
Casey:
Yeah.
Casey:
So Louie Mantia writes with regard to permissions on AI training data from the 22nd of June.
Casey:
Louie quotes from John Gruber, writing on the 22nd of June:
Casey:
It's fair for public data to be excluded on an opt-out basis rather than included on an opt-in one.
Casey:
And then Louie continues: no, no it's not.
Casey:
This is a critical thing about ownership and copyright in the world.
Casey:
We own what we make the moment we make it.
Casey:
Publishing text or images on the web does not make it fair game to train AI on.
Casey:
The public in public web means free to access.
Casey:
It does not mean free to use.
Casey:
Also, whether reposting my content elsewhere is in good faith or not, it is now up to someone other than me to declare whether or not to disallow AI training web crawlers in their robots.txt file.
Casey:
To add insult to injury, that person may not have the knowledge or even the power to do so if they're posting content they don't own on a site that they also don't own, like social media.
John:
So he's so close to getting to the crux of this.
John:
In the first little paragraph here, he's basically declaring that training AI on your data is exactly the same as copying and reproducing it.
John:
And that is not something that the world agrees on.
John:
Louie's opinion is that it is.
John:
The courts have not yet weighed in.
John:
I think to the average person they would say, are those the same things?
John:
Because they seem like they might be a little bit different.
John:
Kind of in the same way that indexing your content in Google is a little bit different than just literally copying it and reposting it on the website, right?
John:
But anyway, if you agree that it's the same as copying, then yeah, sure.
John:
But then the second bit is getting to even more of the heart of it here, which is like, okay, so let's say we do agree that it's the same, which, you know, not proven yet, but anyway.
John:
Um...
John:
What about when somebody like posts a link to your site on a social media network?
John:
And on that website, they do a little embedding or inlining of, like, the first paragraph or whatever.
John:
Like what if someone copies and pastes a paragraph of your thing on another website?
John:
Right.
John:
Even if you somehow had absolute, magical technical control to stop AI crawlers from crawling your website, if people can read your website, they can
John:
quote from it, or embed little portions of it, or screenshot it, or do whatever on other websites. Of course, you don't control those other websites, and so if they allow crawling, your stuff's going to end up in the Google search index, in the AI training model, or whatever, even though you disallowed it from your website. And I would say that, for the most part, we also have laws covering whether someone can take a portion of the thing that you made and
John:
quote it elsewhere. There's a whole legal framework deciding whether that is fair use or not.
John:
And it's complicated.
John:
And the law is not a deterministic machine, as Nilay Patel, who I mentioned before, is always fond of saying.
John:
But we do have a legal framework to determine, can I copy and paste this paragraph from this thing on this person's site and quote it on my site so I can comment on it?
John:
Yeah, in general, you can.
John:
Can I make a parody of this article on my website?
John:
Yeah, you can.
John:
There's a whole bunch of things around that have been fought out in court that we have a system for dealing with.
John:
But all of those things, the court determines, you sue them and they say, actually, this person was allowed to quote that snippet.
John:
You lost your fair use case because it's pretty open and shut.
John:
That's fine.
John:
That snippet just got indexed by an AI training bot, because that person's website allows them — you know, the polite AI bots. Or, never mind, again —
John:
Never mind that you can't stop them.
John:
Right.
John:
That's just the nature of publishing.
John:
No matter what, you do not have absolute control of every single character that you made.
Right.
John:
You do have control over the entire work and the reproduction of the entire work, but you don't have control over other examples of fair use.
John:
And Louie is saying, oh, it shouldn't be like that — I shouldn't have to opt out.
John:
The default should be that nobody can crawl me.
John:
I mean, not only is that technically impossible, but, like —
John:
And that's not the way the web has ever worked.
John:
It has always been, we're going to crawl you unless you tell us don't.
John:
And even the polite ones, you know, they'll read the thing that you said not to do it, but by default, they're going to crawl you.
John:
And I think asking for a world where everything you publish on your website is not only not crawlable by the things you don't want crawling it, but also not able to be quoted by other people, is
John:
clawing back rights that we've already decided belong to other people through fair use. So then the music industry decided to get involved. Yeah, multi-billion-dollar companies have entered the chat, as they would say. We talked about this before, like, hey, Louie Mantia doesn't want people crawling his website — what can he do about it? He's just one person. The music industry, they have a lot of money, they have a lot of IP. This is where the stuff really starts going down.
Casey:
Yeah, so reading from Ars Technica on the 24th of June, Universal Music Group, Sony Music, and Warner Records have sued AI music synthesis companies Udio and Suno for allegedly committing mass copyright infringement by using recordings owned by the labels to train music-generating AI models.
Casey:
The lawsuits filed in federal courts in New York and Massachusetts claim that the AI companies' use of copyrighted material to train their systems could lead to AI-generated music that directly competes with and potentially devalues the work of human artists.
Casey:
So from The Verge article, there's a quote from RIAA Chief Legal Officer Ken Doroshow.
Casey:
And that quote is, And again, that was the RIAA Chief Legal Officer.
Casey:
Mikey Shulman, the CEO of Suno, says the company's technology is transformative and designed to generate completely new outputs, not to memorize and regurgitate pre-existing content.
Casey:
Shulman says Suno doesn't allow user prompts based on specific artists.
Casey:
Reading from the lawsuit, the use here is far from transformative as there is no functional purpose for Suno's AI model to ingest the copyrighted recordings other than to spit out new competing music files.
Casey:
That Suno is copying the copyrighted recordings for a commercial purpose and is deriving revenue directly proportional to the number of music files it generates further tilts the fair use factor against it.
Casey:
Andy Baio writes, 404 Media pulled together a video montage of some of the AI-generated examples provided in the two lawsuits that sound similar to famous songs and their recording artists.
Casey:
Then finally, we'll put a link in the show notes to a Verge article that discusses what the RIAA lawsuits mean for AI and copyright.
Casey:
You know, I saw somebody say this a few days ago.
Casey:
I don't remember who exactly it was, but what's going on if the RIAA are suddenly the good guys?
Casey:
Like,
John:
This is a weird place to be. Are they, though? So here's the thing — this is the tricky bit with this, and we talked about this with the image generators or whatever. This is significant because they're big, rich companies, and you have to take them seriously when they bring a lawsuit. Because this is the kind of, like, who can stop OpenAI and Google and whatever? Well, you know, it's a clash of titans. You need other titans in here to be duking it out, right?
John:
And this is – I think this needs to be fought out in a court in some way.
John:
I say that before we see what the result will be because maybe the result is not what we want to happen.
John:
But like as with the image things, these companies that, you know, you type in a string and they produce a song for you, right?
John:
Yeah.
John:
These models are trained on stuff.
John:
And these record labels say, yeah, you trained them on all our music, right?
John:
Gets back to the question, is training something?
John:
Is AI training?
John:
How does that relate to copying?
John:
Is it just like copying?
John:
Is it not like copying at all?
John:
Is it somewhere in the middle?
John:
Do any of our existing laws apply to it?
John:
And we've discussed this on past episodes as well.
John:
Especially when the company doing the training then has a product that they make money on.
John:
And as I said, with the image training, these models that make songs are worthless without data to train them on.
John:
The model is nothing without the training data.
John:
This company that wants to make money, you pay us X dollars, you can make Y songs, right?
John:
That's their business model.
John:
They can make zero songs if they have not trained their model on songs.
John:
So the question is, where do those songs come from?
John:
If they've licensed them from somebody, if they made the songs themselves, no problem, right?
John:
Again, Adobe training their like image generation models entirely on content they either own or licensed.
John:
Nobody's angry about that.
John:
That's the thing you're doing.
John:
You own a bunch of images.
John:
You license them from a stock photo company or whatever.
John:
You train your models on them.
John:
You put the feature into Photoshop.
John:
You charge people money for Photoshop.
John:
They click a button.
John:
It generates an image.
John:
Whether people like that feature or whatever, legality seems fine.
John:
These other situations where it's like, hey, we crawled your site because we don't care about your robots.txt.
John:
We trained our models on your data, on your songs, on your whatever, right?
John:
And by the way, we have no idea if these companies actually paid for all the songs.
John:
Let's just assume they did.
John:
They bought all the songs from, you know, Sony Music, Warner Records or whatever, or they paid for a streaming service.
John:
They got all the songs, they trained their model, and then they're charging people to use their model, right?
John:
Just like the image processing, I've always thought that
John:
If you have a business that would not be able to exist without content from somebody that you did not pay anything for, that is very different than, oh, we trained an AI model for research purposes, or we trained it for some purpose that is not literally making money off of you.
John:
And this particular case is like, okay, not just that they're making money, but the thing they're providing is, quote, not transformative.
John:
They keep using that word because that's one of the tests for fair use.
John:
Is the work transformative?
John:
Have they taken the thing that existed but made something new out of it?
John:
And they'll argue that in court, whether it is or not transformative.
John:
And also, is it a substitute?
John:
This is another one of the fair use tests.
John:
Is it a substitute for the product?
John:
Is someone not going to buy a Drake album because fake Drake sounds just as good and they just listen to fake Drake, right?
John:
Is it a substitute for it?
John:
It doesn't mean it doesn't sound exactly like it.
John:
That's a whole other...
John:
sad area of law of, like, does song A sound too much like song B, and do they have to pay them, whatever, when they're all made by humans. Right? This is, like, you know, would someone pay for this instead of paying for that? Is one a substitute for the other? And that's what they'll be duking it out about. But I think at its root it's sort of like, where does the value of this company come from? And
John:
Every company has to take inputs from somewhere.
John:
They manufacture something and they sell it to you, or they have a service, they wrote the software for it, they pay someone to run the servers, and they sell it.
John:
There's sort of a value chain there.
John:
And a lot of these companies are like...
John:
And we would make more money if we don't have to pay for the things that make our product valuable.
John:
So we don't want to have to license all the music in the world, but we do want to train an AI model on all the music in the world so that we can make songs that sound as good as all the music in the world, but we don't want to have to pay for any of that.
John:
And that seems to be...
John:
not a good idea, from my perspective. And this is one of, like, the different ways you can look at this — moral, ethical, legal. I think one of the frameworks that I've been falling back on a lot is practical: if, you know, for any given thing, say, if we allowed this to happen, would it produce a viable, sustainable ecosystem?
John:
Like would it produce a market for products and services?
John:
Would it would it be a rising tide that lifts all boats or would it like burn the forest to the ground and leave one tree left in the middle?
John:
Right.
John:
You know what I mean?
John:
Like that practical approach.
John:
People like to jump on like we talked about before with the teaching and Mac stories and everything like they want to go to the moral and ethical thing.
John:
They're stealing from us.
John:
It's our stuff.
John:
They have no right.
John:
And even what I was saying before, like, oh, they don't want to pay for this stuff, but they want to make money off of it or whatever.
John:
But practically, and this is not the way the law works, but this is the way I think about it.
John:
Practically speaking, I'm always asking myself, if this is allowed to fly, what does this look like?
John:
Fast forward this.
John:
Is this viable, right?
John:
If everyone's listening to Fake Drake,
John:
is Drake, is the next Drake, not able to make any money? Does human beings making music become an unviable business, and all it is is just an increasingly gray soup of AI-generated stuff that loops in on itself over and over again? Right? Like, where are the, you know — and we have the same thing with publishing on the web. Like, does Google destroy the entire web because no one needs to go to websites anymore, they just go to Google? Right?
John:
Unfortunately, when these cases go to court, no one is thinking that.
John:
That's not how the law works.
John:
The law is going to be.
John:
Is this fair use or whatever?
John:
Does Congress pass new laws related to this or whatever?
John:
But what I really hope is that the outcome of all these things and the thing I'm always rooting for is can we get to a point where we have an ecosystem that is sustainable?
John:
Which means it's probably, you know, whatever they're suing for is like they want like $150,000 for every song or something.
John:
That is not a sustainable solution.
John:
You can't train an AI model when you pay $150,000 for each song that you trained it on because you need basically all the songs in the world.
John:
That's a big number.
John:
That's stupid.
John:
We do want AIs that can make little songs, right?
John:
I think that is a useful thing to have, right?
John:
So we need to find a way where we can have that, but also still have...
John:
Music artists who can make money making actual music, setting aside the fact that the labels take all the money and the artists get barely anything anyway, which is separate issue.
John:
Right.
John:
And there was a good article about that recently, about the labels, Spotify, and the artists, and the terrible relationship there that screws over artists.
John:
Anyway, I think — I really hope — that the outcome of all these things is some kind of situation
John:
where there's something sustainable.
John:
There's like, I keep using ecosystem, but it's like, you know, you have to have enough water, the whole water cycle, this animal eats that animal, it dies, it fertilizes the plant, like the whole, you know, a sustainable ecosystem where everything works and it goes all around in a circle and everything is healthy and there's growth, but not too much and not too cancerous.
John:
And it's not like everything is replaced by a monoculture and only one company is left standing and all that good stuff, right?
John:
But right now the technology is advancing in a way that if it's not,
John:
If we don't do something about it, the individual parties involved are not motivated to make a sustainable ecosystem, let's say.
John:
I mean, that's kind of what the DMA is about in the EU.
John:
And these AI companies definitely are not motivated to try to make sure they have a sustainable ecosystem.
John:
They just want to make money.
John:
And if they can do it by taking the world's music and selling the ability for you to make songs that sound like it without paying anything to the music that they ingested, they're going to try to do that.
Casey:
I don't know.
Casey:
It's all just so weird and gross.
Casey:
And it's hard because I don't want to be old man who shakes fists at clouds, right?
Casey:
And it seems like AI, for all the good and bad associated with it, is a thing.
Casey:
It's certainly a flash in the pan for right now.
Casey:
But I get the feeling that where...
Casey:
blockchain and Bitcoin and all that sort of stuff was very trendy, but anyone with a couple of brain cells to rub together would say, ah, that's all going to fade, or it's certainly not going to work the way it is today — I think there's a little of that here. But I get the feeling that this is going to stick around for a lot longer, and I think that there needs to be some wrangling done, some legal wrangling. And, you know,
Casey:
I get the move fast and break things mentality of these startups that are doing all this, but I don't know.
Casey:
It just feels kind of wrong.
Casey:
Like, again, I'm not nearly as bothered by it as some of our peers are, but it just doesn't feel right.
John:
It definitely doesn't feel sustainable, like practically speaking.
John:
Regardless of how you feel about right or wrong, if we just let them do this,
John:
Like, as these models get better and better and produce more and more acceptable content, you can see that — again, regardless of how this lawsuit with the record labels ends up — it is taking value away from human beings making music and pushing that value to models making music. But those models are absolutely worthless without that human-generated music, at least initially.
John:
Right.
John:
Again, maybe in the future, there will be models trained entirely on model generated music.
John:
But then you have to trace it back to where that model get trained.
John:
Like in the end, these models are trained on human created stuff.
John:
And there may not be enough officially licensed human-created stuff to train them on at this point.
John:
I think we want these tools.
John:
They are useful for doing things.
John:
Even if you think, oh, they make terrible music.
John:
Sometimes people need terrible music, right?
John:
Sometimes people just need a little jingle.
John:
They can describe it.
John:
They want it to be spit out, right?
Marco:
By most people's definitions, all of my music is terrible music.
John:
They do useful things — unlike, you know, cryptocurrency, which does a very, very small number of useful things and is not general-purpose.
John:
The, you know, AI models do tons of useful things.
John:
Apple's building a bunch into their operating systems.
John:
You know, people use them all the time.
John:
They do tons of useful things, right?
John:
We should find a way for them to do those things.
John:
without destroying the ecosystem. I think we can find a way for that to happen. If you look at the awful situation with, like, Spotify and record labels and music artists, that's a pretty bad version of this, and yet still it is better than Spotify saying, we're going to stream all these songs for free and not pay anybody. Right? I wish I could find that article for the show notes; I'll try to look it up.
John:
But even that, even that is better than the current situation with AI, which is like, we're just going to take it all for free.
John:
Come sue us.
John:
And they say, okay, we are suing you and they'll battle it out in court.
John:
But like either way this decision goes with the music thing, it could go bad in both directions because if they say, oh, you're totally copying this music, all AI training is illegal.
John:
That's terrible.
John:
That's bad.
John:
We don't want that, right?
John:
And if they say, no, it's fine.
John:
It's transformative.
John:
You can take anything you want for free.
John:
That's also bad.
John:
So both extremes of the potential decision that a court can make based on this lawsuit are really bad for all of us for the future.
John:
So that's why I hope we find some kind of middle ground.
John:
Like, again, with Spotify, they came up with a licensing scheme where they can say, we want to stream your entire catalog of music.
John:
Can we figure out a way to exchange money where you will allow that to happen legally?
John:
And they came up with something.
John:
It's not a great system they came up with.
John:
Again, if I can find that article, you can read it and see how bad it is.
John:
But they didn't just take it all for free, right?
John:
And the music labels didn't say, okay, but every time someone streams one of these songs, it's $150,000.
John:
That's also not sustainable.
John:
So obviously they're staking out positions in these lawsuits and they're trying to put these companies out of business with big fees or whatever, but...
John:
Yeah, this is scary. It's scary when titans clash, and I do worry about how the results of these cases are going to be. But I think either we have to have these cases, or — and I know this is ridiculous in our country — we have to make new laws to address this specific case, which is different enough from all the things that have come before it that we should have new laws to address it. And it would be better if those laws weren't created by court decisions.
John:
But our ability and track record for creating technology related laws for new technology is not great in this country.
John:
So there's that.
Casey:
Yeah.
Casey:
And then it continues because Figma, a popular, I don't know how to describe this, like a user interface generation tool.
Casey:
Design tool.
Casey:
Yeah, design tool.
Casey:
Thank you.
Casey:
They pulled their AI tool after criticism that it blatantly ripped off Apple's weather app.
Casey:
So this is The Verge by Jay Peters.
Casey:
Figma's new tool, MakeDesigns, lets users quickly mock up apps using generative AI.
Casey:
Now it's been pulled after the tool drafted designs that looked strikingly similar to Apple's iOS weather app.
Casey:
In a Tuesday interview with Figma CTO Chris Rasmussen, I asked him point blank if MakeDesigns was trained on Apple's app designs.
Casey:
His response?
Casey:
He couldn't say for sure.
Casey:
Figma was not responsible for training the AI models it used at all.
John:
Who knows who trained it?
John:
It's just our model.
John:
Do you know who trained it?
John:
I don't know.
John:
Does anyone know who trained it?
John:
We just found it on our doorstep, and this is a model.
Casey:
Would the real trainer please stand up?
Casey:
Quote, we did no training as part of the generative AI features, Rasmussen said.
Casey:
The features are, quote, powered by off-the-shelf models and a bespoke design system that we commissioned, which appears to be the underlying issue.
Casey:
So if you commissioned it, then you should know.
John:
We had someone else do it, and they gave it to us, and we just took it, and we were like, we didn't ask too many questions.
John:
We're just like, it's fine.
John:
Whatever you got, just give it.
John:
It's probably fine.
Casey:
The key AI models that power MakeDesigns are OpenAI's GPT-4o and Amazon's Titan Image Generator G1, according to Rasmussen.
Casey:
If it's true that Figma didn't train its AI tools, but they're spitting out Apple app lookalikes anyway, that could suggest that OpenAI or Amazon's models were trained on Apple's designs.
Casey:
OpenAI and Amazon didn't immediately reply to a request for comment.
Casey:
This is seriously, like, the Spider-Man-pointing-at-another-Spider-Man image.
Casey:
It's just it's not my fault.
Casey:
It's their fault.
Casey:
Well, it's not my fault.
Casey:
It's their fault.
Casey:
Oh, no, no, no, no, no.
Casey:
It's not my fault.
Casey:
It's their fault.
John:
I think it was OpenAI or whatever, the Sora model that makes movies, essentially.
John:
Someone who was responsible for that was asked in an interview, was your model trained on YouTube?
John:
They didn't give an answer.
John:
Like, maybe, I don't know.
John:
Listen, if you run an AI company,
John:
Figure out how and where your models were trained. I don't know — like, maybe you train them on good things, bad things, whatever, but have an answer. Don't say, we don't know, someone else did it. Like, this seems like table stakes: you should know where and on what your model was trained. Not, like, granular, like every single individual thing — although ideally that would be great, but it's too much, I get it, right? But when someone says, hey, did you train on YouTube, you should be able to answer that with a yes or no,
John:
right? Not weasel about it. And this one was, was this trained on Apple's apps? I mean, anyone looking at it is going to be like, well, if it wasn't, this is the world's biggest coincidence, because it looks just like Apple's app — as Gruber pointed out, right down to the really weird, like, line chart that I never really understood until I saw it explained in Apple's weather app. Right? It was obviously trained on Apple stuff. But
John:
You have to have an answer, right?
John:
If you don't have an answer, say, I don't know, but I'll find out for you.
John:
And then come back.
John:
But, like, the bar is real low here.
John:
Anyway.
John:
Same situation, different thing.
John:
Images, songs, text, UIs.
John:
Uh...
John:
A mock-up tool that makes UIs, it's based on a model.
John:
That model is worthless without being trained on a bunch of UIs.
John:
Where are you going to get enough UIs to train it? From the world of UIs, which we take essentially without permission?
John:
Is that okay?
John:
If we sell that as part of our application, is that okay?
John:
I mean, I wrote a big post about this — what, in January? — like, excuse me, "I Made This." Yeah. And we talked about it on the podcast before, and I took a while to write this, because, actually, speaking of Nilay Patel, I was listening to the Decoder podcast, and there was an episode where he was debating with somebody about the New York Times lawsuit — at the time, the New York Times was suing some company that trained its AI on the New York Times, and they said, you can't do that.
John:
Going back and forth about like, well, the model is just doing what a person would do and it's learning and blah, blah, blah.
John:
Is the person the same as the model?
John:
Does the model have the same rights as a person?
John:
And I was trying to write up something related to that.
John:
And as usual, writing helps me clarify my thinking.
John:
But it is a fairly complicated, circuitous route to sort of really dig down into that thought.
John:
to get to what's at the heart of it. And I wrote this thing, and I think I did get to the heart of it as far as I was concerned, but it's complicated. So every time I try to, like, summarize it on the podcast, I find myself tongue-tied, and, you know, I just quote from the paragraphs. Like, I think if you read the post, my thoughts are in there. But a lot of people have read it and, like, no one has commented on it, so maybe I'm doing a poor job communicating it. But
John:
Uh, I was coming at it from the other angle.
John:
We talked all about training data in this section of the show here.
John:
I was coming at it from the angle of like, um, what was then one of the hot topics, which is say I use one of these tools.
John:
Say I use the Figma tool to generate a UI.
John:
I use the song tool to generate a song or whatever.
John:
Um,
John:
that thing that I made, what is the legal, ethical, moral, practical ownership deal with that?
John:
If I use Figma to make that auto create UI thing and it makes me a UI and I put that in my app, do I own that UI?
John:
If I make a song with the song making tools, do I have the copyright on that song?
John:
There's been legal cases about this.
John:
And I think the only ruling we have now is something like if you make it with an AI generator tool, you don't have the copyright on it or whatever.
John:
But the reason I got to that is because I was getting at the whole, like, oh, you know, training is just like what a human would do: they read all these articles in the New York Times, then you ask the human the answer, and they read all those articles and they now have the knowledge from reading those articles, and they give you an answer — well, that's just what our AIs are doing. I'm like, yeah, but a human is a human and an AI is an AI, and is that really the root of the thing? And I kept chasing that thought down and got to, sort of, the, uh,
John:
The thing that confers ownership, right?
John:
Like when you make something, it's yours.
John:
You write something on your blog.
John:
You have the copyright to it because you created it.
John:
It's so clear, right?
John:
What if you draw a picture on a piece of paper?
John:
Okay, you got the copyright on the picture, right?
John:
What if you use Photoshop?
John:
to make a picture — well, now you used this software tool written by a bunch of other people. Just plain old Photoshop, not like an AI generator, like Photoshop 3.0, right, with layers. Now, you used Photoshop, but you didn't write Photoshop. A bunch of people wrote software to make Photoshop, then you paid Adobe, and they gave you that software product. You used Photoshop to make a picture, but still we say, well, you made that picture, you have the copyright on it, you are the creator, you own it. Right?
John:
Then we say, all right, but what if you can't draw?
John:
What if you tell somebody, like, I can't draw.
John:
Here's what I want.
John:
I want this picture of whatever example I gave a thing, a polar bear riding a skateboard, but I can't draw.
John:
So I asked somebody else, can you draw me a picture of a polar bear riding a skateboard?
John:
So someone goes and they draw a picture of a polar bear riding a skateboard.
John:
At that point, the person who drew it owns it.
John:
Maybe they use Photoshop.
John:
Maybe they don't.
John:
They own it because they created it.
John:
They drew it.
John:
Right.
John:
But then you say, OK, this was a work for hire.
John:
I'll give you 10 bucks.
John:
And our contract says I give you 10 bucks.
John:
You give me the polar bear drawing.
John:
Now I own the polar bear drawing because I paid you for it.
John:
That is a market for creative works.
John:
Someone was an artist.
John:
I can't draw.
John:
They could.
John:
They drew it.
John:
They asked for money.
John:
I gave them money.
John:
They gave me the ownership of the polar bear drawing.
John:
The copyright is now mine, right?
John:
And the act of creation is clear.
John:
The person who drew it, they created it.
John:
I paid money for it.
John:
They sold me their creation.
John:
Now I own it.
John:
All, you know, normal, right?
John:
Now I say, make me a picture of a polar bear on a skateboard.
John:
But I don't say it to an artist.
John:
I say it to an image generator.
John:
It's the exact same thing as I did before.
John:
Before when I did it, it was clear that I don't own anything until I pay for that, right?
John:
Now, when I do that exact thing, but instead of typing it into an email to an artist, I type it into an image generator and I get an image back.
John:
Who created that image?
John:
I didn't create it.
John:
But if you're going to say I didn't create the one that the artist drew for me, because you just told the artist what to draw, but you didn't create it.
John:
Well, if I didn't create that one, I certainly didn't create this one because I literally did the same thing.
John:
I just typed the text in a different text field.
John:
It could literally be the same text.
John:
It'd be an AI prompt emailed to an artist or sent to an AI.
John:
So I'm not going to say that I am the creator of that.
John:
The AI model can't be the creator because computer programs can't own things.
John:
They don't have rights.
John:
Computer programs are made by people who have rights, just like people who wrote Photoshop.
John:
They have the rights to Photoshop and so on and so forth.
John:
But the people who wrote Photoshop have no rights to the things that people made with Photoshop, despite Adobe's little snafu with their license agreements recently, which they clarified.
John:
But anyway, so I didn't make that picture of the polar bear.
John:
The large language model didn't make it.
John:
Who owns that picture of the polar bear, based on the act of creation? Where is the act of creation there? How did that model create the polar bear? Well, it created the polar bear picture because it had been trained on tons of other images that maybe were or weren't licensed. But still, I'm looking around, like, if ownership is conferred by the act of creation, and there's no act of creation here, what the hell — what's going on here? Who owns the picture of the polar bear?
John:
And that like every time I dig down into some kind of like, oh, AI is allowed to do this and you're allowed to train.
John:
It's just what people do or whatever.
John:
And computers aren't people.
John:
I always go through to looking for how we confer ownership of stuff like this, how we confer ownership of intellectual property, how we exchange money for intellectual property, how the market for intellectual property works.
John:
And none of the existence systems make any sense in a world where
John:
I can say the same thing to a human and to a generator. That is clearly not me creating anything, and yet I do get a picture out of it. That came from somewhere, and there's no human actor — it's an indirection, right? And so I think we need new ways to think about, and new laws for, that type of indirection, to say, what is the chain of ownership here? It's kind of like — not quite the same thing, but remember the whole thing where
John:
a monkey took a picture of itself with the camera? Do you remember? Oh yeah, it was, like, a camera set up in the jungle or whatever, and a monkey comes up to it and snaps a picture of himself, and the photographer is like, well, it's my camera, so I own the copyright to that picture. Like, well, doesn't the monkey own the copyright, because it took the picture? Right? And it's like, but the monkey can't own the copyright, it's not a person. Right? And believe me, a monkey is way closer to a sentient being than an LLM. It's a real living thing; no one's going to argue in court that a monkey is not alive. And they're going to say, well, does it have legal rights? Well,
John:
I would say a monkey has more legal rights than a large language model, which is just a bunch of numbers in memory.
John:
Right.
John:
And so this is the kind of conversation we're having.
John:
And honestly, this would be so much easier to have if we had actual artificial intelligence as in sentient artificial beings.
John:
But we don't.
John:
That's just science fiction.
John:
Large language models are not anywhere close to that.
John:
That would be so much easier because you'd be like, well, conscious beings have rights and we need the, you know, whatever.
John:
They always have names of us in sci-fi movies.
John:
The AI Consciousness Act of 2732 that gives rights to the AIs to avert a global war and plunge us into the Matrix apocalypse.
John:
You know what I mean?
John:
Like, it's so much easier when you say, well, people have rights and computer programs that are basically people have rights and it's straightforward.
John:
Yeah.
John:
but we're nowhere near there.
John:
So now we're arguing about monkeys, if they have the copyrighted pictures, and we're arguing about huge matrices of numbers, whether they can create anything.
John:
Or you're saying basically the people who wrote Photoshop own every picture that's made from it.
John:
Because they're like, well, no, the LLM doesn't own it.
John:
And the person who wrote the prompt doesn't own it.
John:
But you know who does own it?
John:
OpenAI.
John:
Because they wrote the program, they crawled all the pictures in the world, they trained the model that you paid to use.
John:
None of those answers are satisfactory in any way. It doesn't feel right, it doesn't seem right, it doesn't seem sustainable. And yet we do need some kind of answer here, even if the answer is, again, like that one legal precedent we had: if you make something out of AI, you don't own the copyright on it. It is not copyrightable, nobody owns it. It's garbage, it's slop, it's
John:
a thing that exists, but nobody can claim that they own it, so it is free for anybody to take and do whatever they want with — but you certainly can't, like, sell it to someone, because you don't own it. It's very confusing. I know that I haven't made this any clearer; you can try reading my post to see if it becomes any more clear. But really, this is a dizzying topic if you think about it for any amount of time, and I think a lot of people are doing a lot of feeling about it, which makes perfect sense.
John:
And honestly, it is more straightforward to feel things about it than it is to think about it because thinking about it gets you into some weird corners real fast.
Casey:
It's just – it's a mess.
Casey:
It's a mess and I don't know what the right answer is, right?
Casey:
Like it's so gray from top to bottom and I just – I don't know.
Casey:
I just don't know.
Marco:
Well, and I think we're going to have to be fighting this and working this out for a while.
Marco:
I mean –
Marco:
Look at how much disruption to existing businesses, existing copyright law, and existing norms was caused by the web and then the rise of other things on the internet.
Marco:
This is how technology goes.
Marco:
There are massive disruptions to what has been established, what many people have held dearly.
Marco:
There's massive disruptions to that when new tech comes around sometimes.
Marco:
And sometimes it takes a decade or two to really settle out and work out what are the norms?
Marco:
What should the laws be?
Marco:
What does copyright mean in this new world?
Marco:
Things like that.
Marco:
That takes a long time to work out sometimes.
Marco:
The rise of these AI techniques and models is potentially as disruptive to existing business models and norms and perspectives as the web was when it first came out a thousand years ago.
Marco:
So I really think we're in for a while of just not knowing there's going to be a lot of damage and destruction along the path to get from where we are now to kind of where things settle out.
Marco:
It will destroy a lot of businesses and it will make it hard for a lot of people to do what they've been doing.
Marco:
It will also create a bunch of new businesses and create a bunch of new value and new opportunities and
Marco:
just like any other massive disruption.
Marco:
I think this is a very large disruption, and it's mostly only going to start to become visible of what the other side looks like just after a bunch of time has passed and we've gone through a bunch of messiness.
Marco:
And we're in such early days, it's really hard to know where we're going to end up right now.
John:
I feel like this is going to be, in some respects, not all, but in some respects, even more disruptive than the initial web, because the initial web was kind of like...
John:
text — we have laws governing that. It was a massive shift of wealth, obviously: newspapers go out of business, Craigslist gets rich, you know what I mean? Like, we saw that giant shift — paper magazines, like, the shift of publishing, right, and web search and all that or whatever. But during that entire thing, people were upset and it was a big turmoil, because it was like, these things used to be huge: every city had 25 newspapers, a newspaper reporter was a big job, and, you know, all of a sudden all that money's going elsewhere, to these dot-com things or whatever. But during that whole process,
John:
There was mostly agreement that newspapers own what they publish, websites own what they publish.
John:
We have existing copyright laws for this.
John:
There's the whole Google search index thing that we can figure out and fair use on the internet and stuff.
John:
But in general, it was just a massive shift of power and money from older industries to newer ones, mostly following along the shape of laws and ideas and morals and ethics and societal understanding about the written word, mostly in the early days of the web.
John:
Especially before social media really came and mixed that up a little bit, right, with the whole aggregation of humans all talking to each other and quoting things and linking out or whatever. In hindsight, that seems much less disruptive than this AI stuff, which is a free-for-all. No one knows anything. No one knows what's legal, what's not, what's sustainable, what's not, what should we do, what can we do, what are people doing, how valuable is this, how useful is it?
John:
Just so many questions. And all the laws that we have that seem like they could apply to this, some of them do apply, but there are these huge areas where it's like "here be dragons" on the map. They draw the big dragon, and the thing is, nobody knows what's there, and there's a lot of money behind it and a lot of people running in that direction. And it's not even clear where or how this will shift the power. On the internet in the early days, it was pretty clear: paper newspapers' power is going away from them
John:
and towards websites.
John:
Like, that trend was visible to anybody with a clue, and it was just a question of how fast, how hard, you know, whatever.
John:
Here, is this going to shift power massively to the record labels because they own all the music, for example?
John:
Or is it going to destroy them because everything they have is now worthless because AI models can be trained on it, and it's a perfect substitute for what they previously made, and no one wants anything.
John:
Like, you can't even tell which direction it's going to go at this point.
John:
It's so early, and I just don't think that was true of the web.
John:
So this is...
John:
An exciting time to be alive in many ways, especially if you're in any industry, any creative industry that involves intellectual property that AI touches at all.
John:
And at this point, that's nearly all of them, right?
John:
And right now, what it does is not, you know...
John:
Not particularly amazing, but it is good enough for so many use cases, and this stuff generally doesn't get worse over time.
Marco:
Thank you to our sponsors this week, 1Password and Photon Camera.
Marco:
And thank you to our members who support us directly.
Marco:
You can join us at atp.fm slash join.
Marco:
Members get a bunch of perks, including ATP Overtime.
Marco:
This is our weekly bonus topic that's an extra segment that only members get to hear.
Marco:
ATP Overtime this week is going to be about a rumor reported by The Information and Mark Gurman on
Marco:
um, about some changes and plans to what Apple is going to be working on for the next Vision Pro, and kind of what they can maybe do to make the next Vision Pro cheaper and how they're going to possibly do this and everything. That's what we're talking about in ATP Overtime this week. Join now to listen: atp.fm slash join. Thanks, everybody, and we'll talk to you next week.
Marco:
Now the show is over.
Marco:
They didn't even mean to begin.
Marco:
Cause it was accidental.
Marco:
Oh, it was accidental.
Marco:
John didn't do any research.
Marco:
Marco and Casey wouldn't let him.
Marco:
Cause it was accidental.
Marco:
Oh, it was accidental.
Marco:
And you can find the show notes at ATP.FM.
Marco:
And if you're into Mastodon, you can follow them at C-A-S-E-Y-L-I-S-S.
Marco:
So that's Casey Liss, M-A-R-C-O-A-R-M-E-N-T.
Casey:
So long.
John:
Not-so-real-time follow-up on my earlier statement about Apple Silicon Macs not being able to use PCI breakout boxes.
John:
That is not true.
John:
You can use Thunderbolt PCI breakout boxes.
John:
Obviously, you can't use... Yeah, it's just not GPUs.
John:
Yeah, but you can't use GPUs internally either.
John:
That's the thing.
John:
Yeah.
John:
So, still, Apple should have put the Mac Pro in the configurator.
John:
Or I suppose they could have said, hey, do you use PCI cards?
John:
No one's buying it except you.
John:
You can use PCI cards?
John:
Well, you can buy a Mac Studio and also this third-party product that we don't even sell, or...
John:
You could buy a Mac Pro, which is the product in their lineup.
Marco:
I think two things are simultaneously true.
Marco:
Number one, they should keep making the Mac Pro because it does have uses.
Marco:
And number two, absolutely nobody should buy the Mac Pro effectively.
Marco:
Anybody who's going to a page on Apple.com saying, what Mac should I buy?
Marco:
None of those people should buy it.
Marco:
None.
John:
No, they should.
John:
The whole point of this is it's a path that leads to all of our products.
John:
And maybe there's only one very lonely, overgrown path that leads to the Mac Pro, but it's got to be there.
John:
Look, I would say number three, your product chooser should let you choose from any of the products, depending on which things you answer.
John:
Put as many scary questions in there as you want.
John:
There's just got to be a path that lands on the Mac Pro.
John:
Because otherwise, look, what they're saying with this is...
John:
no one should buy this product, and I don't think Apple believes that. If you asked them, they'd say, well, some people should. Okay, great, but you have a tool that lets people choose, and it has every single Mac you sell except for that one. That just seems like a bug to me. Someone should report it; they should fix it. You should report it. I just want them to make a Mac Pro that's worth buying. I mean, that's a bug. Maybe they're working on that. We'll see.
John:
So, I mean, I feel like we covered this in the past, but what are you waiting for? Like, what would make it worth it? Am I waiting for anything in particular? I don't know. Because, again, with the gaming situation on Apple Silicon Macs being entirely unclear, if I did buy a Mac with a big, beefy GPU, like bigger than a Mac Studio GPU, that would be a speculative purchase. It would not be like my current Mac Pro, which I literally knew I could run Windows games on, and do, and they work fine, and I literally boot it into Windows. That's not speculative; that's a thing, right?
John:
If I decide, hey, I want a bigger than Mac Studio GPU in an ARM Mac, I am like crossing my fingers that some magical point in the future, I will be able to do interesting gaming things on it.
John:
I don't know if I'm going to make that speculative purchase.
John:
I don't know if Apple's going to make a Mac with a better than Mac Studio GPU in it.
John:
And maybe they make it and it's just too rich for my blood and I can't spend that much money on something speculative, right?
John:
Like I said, my default is an M4 Mumble Mac Studio is potentially the computer I will replace this with whenever they release that like next year or towards the end of this year or whatever.
John:
But I would like to see, you know, show me something. Show me the Mac Pro. Show me something that's not a Mac Studio in a giant cavernous case, right? That's what I would like to see from them, and then I can decide: is it worth it for me to get that? Because it's not a slam dunk like the 20... well, it's not as big a slam dunk as the 2019 was, because, again, that one just wasn't speculative.
John:
It's just kind of wishful thinking at this point to think you're going to be running Windows ARM games natively on, you know, you're going to be booting Windows for ARM on your, you know, Apple Silicon Mac Pro, or you're going to be running Windows caliber games in macOS because Apple will have gotten all the AAA game developers on board.
John:
That is all just a twinkle in someone's eye right now.
John:
It is not a real thing.
Casey:
I just, I feel like, and I'm going to say this and I know, and I understand why it's not appealing to you, but I feel like so many of your problems would, well, maybe not even problems, but so much of your life would be so much better if you would just get a damn Mac studio and a damn Windows PC.
Casey:
And I get it.
Casey:
I don't want to run Windows anything.
Casey:
I don't.
Casey:
And I know you are even worse than me in this capacity, but like that would make so many things so much better in your life.
John:
I would probably have a gaming PC if I had a place in the house for it, but I don't.
Casey:
I mean, I hate to break it to you, but I really don't think that there is ever going to be a Mac Pro that does the things that your current Mac Pro does.
John:
I mean, that may be true.
John:
I'm rooting for it, but right now, the outlook doesn't look so great.
Marco:
Yeah, I would definitely not hold your breath on that.
John:
I mean, like, the thing is, if I had, two years ago, predicted how this would go...
John:
Actually, I'm kind of surprised at how much motion there is here.
John:
The copilot plus PC, how hard Microsoft is pushing into ARM PCs after doing such a bad job with Windows RT, right?
John:
Apple with its whole game porting toolkit, like both those parties, both Microsoft and Apple are actually surprising me with how hard they're trying to make my dream happen.
John:
They're just not succeeding.
John:
Right, but they are. They're trying more than I thought they would. I didn't think they'd be, like... on both sides, I have been pleasantly surprised by the additional effort that they are putting in. I think everyone kind of is. It's just that they're just not really doing it.
John:
But I give them kudos for the effort.
John:
If I had to pick one thing, I would wish that Microsoft would commit to a transition to ARM, but that's not what they want to do.
John:
They seem to think that they're going to support x86 and ARM.
John:
forever off into the future, which I think is a dumb strategy, but that seems to be what they're doing. And that doesn't help me, and that doesn't help Windows games get ported to ARM. All that does is bifurcate their market and say, well, all the triple-A games will still be on x86 with Nvidia cards, and ARM will just be for people's laptops. And Microsoft may be perfectly happy with that, but it doesn't help me over here with Apple Silicon.
Casey:
I mean, what PC games are you playing with regularity right now?
John:
Destiny.
John:
I don't know if you know this, but Destiny runs on PC.
Casey:
But that's the thing.
Casey:
Like, is there no other appliance that you can buy to run Destiny?
Casey:
Can't you do it on PlayStation?
John:
Well, Destiny runs at higher resolution and higher frame rates on gaming PCs.
John:
I don't really play it on my Mac Pro.
John:
I play it on my PlayStation 5 for a variety of reasons.
John:
But it does run better, as defined by resolution and frame rate, even on my Mac Pro than on my PlayStation. The PlayStation maxes out at 60 frames per second, right, and I can get higher than that, depending on settings, and you can go way higher. I actually have played Destiny on my PS5 at 120 frames per second on my TV, but it has to lower the quality substantially, and I generally don't play Destiny on my TV because it'll burn it in. But I did try that just to see what it was like.
John:
120 frames per second is good. All the Destiny streamers who are out there playing Destiny, they occasionally have their frame rate displayed in the corner, and they're triple digits, always hundreds of frames per second, sometimes pushing into 200. It makes a difference. It looks and feels smoother, especially in PvP. And that's even if I'm playing on a controller, because at this point, sadly, I'm better with a controller in Destiny than I am with mouse and keyboard.
John:
And also, controller's way better for my RSI, so I'd be doing it anyway.
John:
But yeah, Destiny is one choice.
John:
And games come out all the time, and they come out for PC.
John:
They don't come out for the Mac until three years later when Apple puts them in a keynote, right?
John:
So, you know, there's past games, there's future games, there's my gigantic Steam library that I still haven't played through.
John:
You know, I mean, I'll get a PlayStation 5 Pro, I'll get a PlayStation 6. I do like consoles; they're great. Maybe someday the gap between PC and console will be diminished. Even now I would say it's more diminished, because 60 frames per second on PS5 is such a change from 30 on the PS4 that I feel like the gap has narrowed. Destiny players were playing at 100, you know, 200 frames per second back when I was playing at 30. Now they're playing at 100, 200 frames per second and I'm playing at 60.
John:
Right?
John:
I'm gaining on them.
John:
So maybe at some point I'll be like, you know what, I don't need a big GPU and I'll just get a Mac Studio and be happy with it.
John:
And that's looking like the most likely situation right now.
John:
But, you know, we'll see.
Casey:
I mean, to be clear, as much as I'm giving you a hard time...
Casey:
I want you to have what you want.
Casey:
I can make an argument.
Casey:
Even I can make an argument for the Mac Pro, for a really beefy Mac Pro that's useful for people that work outside of a music studio.
Casey:
I'm not saying that your desires or wants, as much as I'm giving you grief about it, I'm not saying your desires or wants are unreasonable.
Casey:
I don't think Apple will be achieving them, but I don't think they're unreasonable.
John:
It was exciting that they did it with the 2019 one because, again, as I've said before, despite my gaming things, this is not a rational purchase, in the same way that you don't need a Ferrari to get to work faster.
John:
People just like fast cars because they like fast cars.
John:
I just like powerful computers because I like powerful computers.
John:
It's exactly the same thing.
John:
Me trying to justify a Mac Pro is like someone trying to justify a Ferrari.
John:
It's like, well, I need a car this fast.
John:
to get to my work.
John:
No, you don't.
John:
Nobody does.
John:
But people want them because they're cool.
Casey:
Right.
Casey:
And that's fair.
Casey:
And that's totally fair.
Casey:
But I feel like from, to my eyes, we're starting to cross from, oh, it's kind of adorable that John is still rocking his Mac Pro to like,
Casey:
Man, I kind of want you to move on to a Mac Studio because I think you might enjoy it a lot more.
John:
Well, I mean, I'm not buying an M2 one at this point.
Casey:
Well, that's fair.
John:
No, that's fair.
John:
This is not the time to buy a Mac Studio.
John:
It isn't.
John:
It isn't.
John:
Hang in there for the M4 one.
Marco:
Yeah, I think when the next one comes out, I think that's your move.
Marco:
I just cannot see a future in which they make the Mac Pro that you want.
Marco:
And so you might as well get the Mac Studio, which is the Mac Pro without slots.
Marco:
Like, that is the new Mac Pro.
Marco:
I can't say it enough.
John:
And with a wimpier GPU.
Marco:
But they just, like, the Mac Studio is the Mac Pro.
Marco:
They should have called it Mac Pro.
Marco:
That is the Apple Silicon Mac Pro.
Marco:
They should not have.
Casey:
No.
Casey:
Can you imagine the aneurysm he would have had?
John:
It doesn't make sense.
John:
They sell a thing called the Mac Pro that's way bigger.
John:
But it's the same computer.
John:
It's just a built-in PCI breakout box.
John:
I know, I know.
John:
It's still got the slots.
John:
It's still... Anyway, we'll see how it goes.
John:
And by the way, by the time I do get this, my computer is essentially five years old now.
John:
Already, this is a pretty good run for a computer that I bought just before the...
John:
processor transition, right? Which is, you know, unfortunate. We already said when it happened, like, oh, my poor Mac Pro, whatever. But I love this machine, and I've already gotten five years out of it, which, granted, is half of what I got out of my last Mac Pro, but, you know, processor transition, right? So if I ditch this machine at six years old, that's longer than any of your laptops lasted, right? It's still a pretty good run.
Casey:
Hey, we're just excited if Marco makes it six months, much less six years. He's been pretty good with the 16-inch; I think it's almost two years old now, right? No, it's the M3 Max, it's the black one, sorry. I mean, to be honest, lately I haven't been much better, so I shouldn't be casting stones in this glass house, but generally speaking, Marco is much more frequent with his purchases.
John:
I mean, no matter what, I feel like I've gotten a good run out of this Mac Pro, and I'm enjoying it for, you know, as long as... Like, I'm excited that Sequoia runs on it; that's cool. Next year, probably not, right? So that's really putting a deadline on this. Like I said, I'm willing to run this with last year's version of the operating system for some period of time if I have to wait. But, you know, we'll see what happens. I keep my cars for a long time; I keep my Macs for a long time.