
Looks like this is the entire Lioness theme. Great series, and man, Zoe does a great job as the badass agent she is on the show, yet she still cracks up when it comes to her family. Love this series.

#war, #cartel, #tvonleo, #skiptvads, #zoesaldana

seeing Goatseus Maximus on cg trending is... hilarious

What's your take on memecoins in this bull cycle?

#freecompliments

they're a place where you can make a shit load of money with the right strategy. the other side of this is that you can easily lose. i'm gambling lol

No strategy can be right in gambling.

#freecompliments

that's not necessarily true. even just a risk management strategy is helpful. can be as simple as a limit on losses or a limit on what you enter with. still gambling, but you can take precautions.

If something is deeply personally important to me I will get it done, even if at first I have no clue how.

Sure...it's all about priorities in life.

#freecompliments

That's an awesome attitude!

Good morning, folks! Another week is starting! Let's go with the #threadcast

Speed has been my rolling training; some call me fast and furious, but others say it's a dead race... What would you call me?

What do you think about me starting my week having coffee and taking some photos of nature?

!DOOK


You just got DOOKed!
@calebmarvel01 thinks your content is the shit.
They have 17/200 DOOK left to drop today.
Learn all about this shit in the toilet paper! 💩

It's a great way to start the week!

Thank you very much 😊😊😊... You've been making me learn Portuguese little by little... haha

Ah, yes 👍👍👍, let's start this week at full voltage ⚡⚡⚡.

This week has a holiday... it's so nice that the week goes by faster. hahaha

So good!

For me, I think the next holiday is only in mid-November now.

I just hope next year is much better than this year, which had almost no midweek holidays.

The elections are finally over. Now it's time to face the consequences.

As expected, Nunes won in São Paulo. Let's see if he'll be more active this term.

It's not for nothing that he earned the nickname "ghost mayor".

I have no admiration for him, but if he manages to influence the situation with Enel in some way, he'll earn points with me hahaha

Our Brazilian guild in Splinterlands, Hive BR, currently the best guild in the country, took second place in the last brawl!

Congratulations, everyone! Although our performance has been reasonably good, it had been a while since we achieved such an impressive placement.

Speaking of which, I took the chance today to tweak my deck.

I managed to level up some nice cards, so I hope that helps even more in the upcoming brawls.

I also used the brawl rewards to buy some Gladius packs.

I pulled another Quora! Now I only need one more to get her to level 3.

Quora gets a significant upgrade at level 3, so I hope to get the missing card soon!

I'll also take the chance to adjust my crypto portfolio. Believe it or not, I had some spare change left this month, so I'm going to buy a little $BTC, $ETH, $RUNE, and $HBD.

I just got into my city now... In a few minutes I'll be home to make my lunch.

Good afternoon, everyone! Have a great week!

Brave cameraman! #f1

Or a slow one. Things happen so fast, maybe he did not have time to react yet.

I think realistically he "doubled down" for the job...

As from other angles we can see his mates getting away...

ah so he is really a brave one. !BEER

Hi, @solymi,

This post has been voted on by @darkcloaks because you are an active member of the Darkcloaks gaming community.


Get started with Darkcloaks today, and follow us on Inleo for the latest updates.


Hey @forkyishere, here is a little bit of BEER from @solymi for you. Enjoy it!

Learn how to earn FREE BEER each day by staking your BEER.

There you go... here's the shot that proves it... I knew someone would eventually get it.


It is important to heal our relationship with money, as with other things, so we can find the path to our self-realization.

#humanitas

Money is not evil; the evil is in the person who does not use it rightly.

#humanitas

Hello community, I want to tell you that I've discovered a new passion in archery. I loved that practicing it means being conscious and present; it quiets my thoughts, which lets me exercise serenity.
#memories #Spanish

I just want to say you're not the only one life is playing some bad jokes on! 😂
Good morning, my beautiful people!
Happy start of the week

## Random Threadcast🧵


It's time to start our threadcast using the tag #randomthreadcast!

This thread works by asking everyone random questions on different topics, and they can share their thoughts and ideas about them.

This could be anything except personal questions! It could be lifestyle, finance, productivity, or anything you want!

I will start the thread by asking random questions, and everyone can answer them. The asking part is not limited to me; everyone who joins the discussion can also ask questions, and everyone can answer. This way, we will discuss different topics at once and share ideas.

Don't forget to use our tag #randomthreadcast and invite your friends so they can join the discussion 🚀

Let's start the party! 🎉

#threadcast #discussion

How do you cope when you feel stressed?

I always eat when I feel stressed. I think that's what they call "stress eating".

I don't know why, but it helps me feel relaxed and stress-free.

Sometimes I also give myself time to walk outside in nature, free from any gadgets. This way, I can think clearly.

Food, massage, and the swimming pool (or any kind of being submerged in water). I find all of those relaxing. Meditation can help, but I have to already be in a relatively calm state before starting.

Massage sounds nice! I've never tried it; maybe I should.

How many times do you do this per month?

Actually, it was my first one this year. I did it yesterday; that's why I thought about it when reading you.

How was your first massage?

I also want to try it.

Nearly lost my left arm 😅 Maybe my muscles weren't in the right place, but I guess now they are. It was a strong massage, but I think it was beneficial afterward, and I would probably go again.

I hit the gym!

That's very productive!

On October 28, 1886, the Statue of Liberty was officially dedicated by President Grover Cleveland in New York Harbor. The statue, a gift from France, was meant to commemorate the friendship between the two nations and became an enduring symbol of freedom and democracy in the United States.

I didn't know about this. I'm more curious about the construction of the Statue of Liberty.

Nice, I didn't know such a threadcast existed. That's an innovative idea; let's see how it goes over time. Good initiative.

thanks for rethreading it - I found it because of you. I also like this concept

Thanks, man!

No problem, thanks to you for trying new things to make this place more alive.

yes! This is actually my first threadcast and I want to continue doing this and be more active.

Ha, nice then, happy to be here for the first one. Are you planning on doing it daily or more like at random times of the week? 😂

hahaha lol my initial plan is random times of the week probably 1-2 times a week. But yeah, it depends.

Ah interesting topic. Need to think what random information I can drop in here.

How do you view failure, and what have you learned from your mistakes?

I see failure as an opportunity to learn. I always take them as an advantage to move forward and to start again.

I think it is a great way to learn from your experience and use it as your motivation to get up again.
#randomthreadcast

How do you practice kindness in your interactions with others?

I always keep in mind that everyone has life challenges, so I consider their feelings and opinions at all times.

This way, the kindness that you can give to them is being understanding, polite, and considerate.

#randomthreadcast

By treating everyone with respect, regardless of their social standing or anything else

You earned my respect for this 🫡

I have the same principles!
#randomthreadcast

What emerging technology are you most excited about, and why?

Of course, the answer will be obvious. I'm so excited with using AI in our daily life to boost our productivity.

I don't use AI to replace my work, and I never thought of AI doing that. I think it can be a great assistant and help guide us to accomplish our tasks quickly.
#ai

Many believe that, right up to the point where the #ai replaces the job.

It will ultimately affect all knowledge work. The physical world does have a bit of a buffer since AI has no hands or feet. Robots do, though.

I don't think AI can replace jobs (at least for now); if it does, it will probably still take a long time.

But as of now, I see it as a tool to boost productivity and give everyone a smoother workflow.

All the stuff related to space. I think it will impact our knowledge of the universe a lot. Also, some practical applications are being tried, such as generating solar power from outer space, where it's never "night" or cloudy.

Cool one. I am also interested in solar power. It's a very good alternative for electricity, and they say you can save a lot of $$$ by using it over the long term.

How do you find balance between your work and personal life?

This question is not limited to those who work full-time. Everyone can answer, even students. I always find balance by setting aside time for myself and never forgetting about my "me time".

No matter how hard the work is, no matter how many tasks I have, I do not let myself work nonstop every day. I make sure that I have time for myself, friends, family, and loved ones.

#life
#randomthreadcast

I don't, sadly... it's my target for next year. This year I was able to improve my health and my finances but my personal life really suffered

I don't think that's a good sign of a healthy lifestyle. Give yourself some free time now and then. Good luck!

This is a great idea. It is a terrific way to get different questions and responses going.

Are you enjoying watching the collapse of Hollywood?

The centralization of films and television shows is dying.

What are your thoughts?

I don't personally enjoy the "collapse" because they can still produce TV shows for entertainment.

However, I like the way streaming services adapt to technology to distribute and deliver content to their audiences.

#randomthreadcast

They can still produce but other players are entering.

Yep! It's slowly dying I guess.

I like it. Opens room for different industries

Yes it does. It also decentralizes the influence.

Last winter I found an icy river and attempted to break the ice with my boot. Unfortunately I succeeded...

💧 💧 💧 👞 💧 💧 💧

oww shoot, i hope that's not illegal 😂

I've seen this poll by @walterjay and I think it's worth adding to our #randomthreadcast

I do think the election has a huge influence on the whole market. Where do you think the market will go after the election?

https://inleo.io/threads/view/walterjay/re-leothreads-kb1yicw2?referral=walterjay

Have you ever gone sky diving?

the thought terrifies me!

No have you?

Why jump out of a perfectly good airplane?

I mean, do people willingly jump out of moving cars?

Hell no. I don't get why people do that lol

hell nah, and i think I will never try this one. I'm afraid of heights!

How about you?

Never have and never will

haha we're the same! may I know why?

I just don't find it amusing at all lol

kinda pointless to me :P

oh, I think people have different preferences in life!

Did you know that 9% of Sweden's territory consists of water? 🌊🌊

Hell nah! That's an interesting fact :)

What's one habit you've adopted this year that has significantly improved your life?

The habit I recently adopted that has improved my life is writing consistently. I always do my best to find time to write so I can improve my vocabulary and creativity.

This consistent writing is not only in the form of blog posts or long-form content; I also take advantage of a short-form content platform such as InLeo to share my ideas and thoughts.

I always see it as an opportunity to build my "second brain" on the blockchain that I can come back to later.

#randomthreadcast

Working out and eating healthier. It literally changed my life

Nice! You're healthy based on your answers!

I like this idea!

thanks! I might continue doing this in the community as I am getting good feedback

Good morning lions!
I had some issues with posting via mobile yesterday; let's hope I can fix it today!
Till then, let's have a laugh!

Good morning as the sunshine tries to break through my windows. Have a wonderful and easy week!

#freecompliments #gmfrens

Good Morning.

Have a wonderful and green week.

#freecompliments

A light dusting of snow in western Newfoundland this morning.

Onset of winter?

#freecompliments

Not really, we always get one snowfall before Halloween. Then it's mid-December before it comes for good.

Okay... enjoy the snowfall, sir.

#freecompliments

I fracking hate it. lmao

The Christmas season is taking its course. Keep your coffee close.
#gmfrens #freecompliments

That will be gone in a few hours ;) !BBH !DOOK


You just got DOOKed!
@bradleyarrow thinks your content is the shit.
They have 11/60 DOOK left to drop today.
Learn all about this shit in the toilet paper! 💩

@daniasi! @bradleyarrow likes your content! so I just sent 1 BBH to your account on behalf of @bradleyarrow. (11/100)


May arepas never be missing from your table. Happy day #hivenftgamelatino #spanish #bbh #ladiesofhive #humanitas #gmfrens

Yummy, care if I take a bite? Lols.
#gmfrens

and even two bites, shared food tastes better, happy day

that's very kind of you, thanks. Enjoy your day.
#freecompliments

just enjoy your day

It looks very tasty

Thanks, we're sharing it

See, @bulkathos... you give me one of those arepas and I'll stop fighting with you so much.

On July 9, 1776, Colonel John Neilson gave one of the earliest public readings of the Declaration of Independence in New Brunswick, New Jersey, marking a historic moment in American history. 📜🇺🇸 #americanrevolution #facts

Wow! A glance into history. Wondering how influential he was?
#gmfrens #freecompliments

one of the very very very early influencers on social media sharing the latest news lol

Some tokens like $BEAM are in the green despite the general #altcoin downturn

#gm everyone. Ready for another successful week? Share with us your #selfie, empowered, in this #threadcast

I need to share a mate with someone!

#selfie #memories

If it's with sugar, count me in.

A selfie with this beauty. Good morning.

#selfie #memories #spanish #dog

Harley Quinn's pet!

Awesome! Love it

Thank you :) !BBH !DOOK


You just got DOOKed!
@bradleyarrow thinks your content is the shit.
They have 12/60 DOOK left to drop today.
Learn all about this shit in the toilet paper! 💩

@mamaemigrante! @bradleyarrow likes your content! so I just sent 1 BBH to your account on behalf of @bradleyarrow. (12/100)


Hello community, when was the last time you did something for the first time? #selfie #memories #Spanish #motivation

I adore first times... I try to make them happen once a week.

Yes, they have that magic of making us smile. I love that you have the goal of making them happen once a week. 😊

Ha ha, I love those Secret Santa games.

Checking who is doing homework by sharing #selfie, #memories

Why does this girl eat strawberries with that face? #selfie, #memories

#delta sues #crowdstrike for $500 million over past #it outages; it may be a good buy for #microsoft

BMW is making a bold move in automotive design. The company has announced that its future gas-powered cars will adopt the same Neue Klasse design language as its electric vehicles.⁠

Man, the old designs were so much better.

#bmw #car #design #newsonleo #automotive

I don't think I'm fit to lead FreeCompliments anymore. I've fucked up.

What has happened? 😱

I happened

I'm truly sorry that I do not know what to say. I hope you can share your troubles if it helps...

What happened, dear? You have been phenomenal all along... in this journey.

#freecompliments

So you all probably know Gotye's "Somebody That I Used to Know." Released in 2011, the song topped charts in 23 countries and has been streamed billions of times. But here's the shocking part: Gotye (real name Wally De Backer) didn't get rich from it.
I wrote a short blog about this interesting story.
#linkincomments #gotye #musicfacts #behindthesong #somebodythatiusedtoknow #freecompliments

The user is required to time the launch of spectacular jumps with a single click so that the character does not collide with obstacles.
geometry dash

#gmfrens #freecompliments

Good morning/afternoon to everyone on INLEO

What lies behind us and what lies before us are tiny matters compared to what lies within us. — Ralph Waldo Emerson

#thoughtoftheday #quotes

Good Morning.

Have a GREEN Monday.

#freecompliments

Vent your anger #threadcast!!!

Want to spam your negative feelings on threads? Come on here. We can take it under this threadcast!

I hate Service Now.

"Service" can mean many things, though...

Service Now is a workflow management platform

😆

Our organization suddenly halted the process of acquiring it after spending tens of millions looking into it for years LOL

oh wow

hopefully they migrate to a better system!

haha I guess time will tell

Hope you'll find some fun in my madness, and hopefully I'll find some fun in it as well, when I'm finished. In the meantime, the fact that I'm doing this at all is a testament to how weak and pathetic I am. None of our forefathers acted out like this. I'm simply a shame to those around me.

Me too, man... Sometimes I feel like a burden to my family. Not today, but sometimes...

None of our forefathers acted out like this. I'm simply a shame to those around me.

You are loved

And you deserve to be loved

If I am, maybe you are too. Don't worry, it'll pass.

I should be the one who passes, but as I said elsewhere, that would be too easy. This is not how a man (or frankly, a normal human adult) thinks. What in the hell is wrong with me yet again... I know what is, and it's pathetic.

I'm not as good of a person as I may seem here on threads. People can be good in one place and terrible in another. It's okay to live, even if you feel you don't deserve to.

There's at least one person who will be sad if you're not there anymore...

How can my family possibly be proud knowing that this is what they raised? All their intensive efforts to create a functional human being have been reduced to this total waste. And the sad thing is that they might even blame themselves to any degree when the truth is that it's all me. While I seek to improve, I hope that my failures as a human being, as a son, and as a man will not reflect on them.

Let those who read this laugh, not pity me, because I deserve no pity. I let myself out of my own control, like a child who does not know any better. Even in my normal state, I do not behave like a man. This is not to be pitied or sympathized or empathized. I do not deserve that. I deserve the pain of consequences until I change my ways. A LOT of pain.

Men did what they were supposed to do without a second thought. It comes naturally to them. Not to a moron like me. What kind of a piece of 💩 am I to not be able to do the same? I have gone so wrong, so bad. What a waste of a life.

Maybe someday I'll be a man, but it'll most certainly involve me never doing something like this again, or even getting it into my mind that this is remotely acceptable. How sad is it that I had an entire venting threadcast inspired by my insipid moaning and whining? Can I ever even be a man having behaved like this? I doubt it, but I certainly must try by acting the right way. Even if I'm never truly a man, at least that would be the right step to take.

It IS sad, but not as pitiful as you might imagine... We all have bad days like this (and the lucky people who don't have them are just lucky). I'd be happy if someone made a threadcast like this for me; that's why I did this for you.

How sad is it that I had an entire venting threadcast inspired by my insipid moaning and whining?

I shouldn't be pitied, but spit upon and laughed at until I behave the way I should. Some people can only learn this way. I can't possibly be happy that I devolved into this. I've burdened you enough. I shouldn't be responding back to your original threadcast, but rather to myself. Sorry to you, and sorry to everyone else. And most importantly, to those close to me. Something I must show through action.

That's right, beat yourself up harder, you moron. That'll solve all your problems. Hitting yourself will really do the trick, won't it? Doing that instead of productively working out how to fix yourself. What a goddamn idiot.

Maybe if you beat your own brains out to the point of permanent traumatic brain injury you'll be less of a burden than someone who's self aware enough to continue doing the damage you're doing. Or, instead, you could act like a man and solve your problems that you create! Stupid little child, little boy.

What a weakling. Become stronger, you little freak. You're not a man. Man up.

Stop looking at your replies for attention, you attention-seeking little crap. Nobody cares, nor should they care to give you the time of day, or even a fleeting moment's thought. Go away. You're unliked and unwanted.

Who are you to ever criticize anyone when you're such a complete disaster? I revert all of my negative criticism towards others. It is invalid. Only rational, grown adults should have their criticisms validated.

Once I grow up and change myself into an actual man, then my new thoughts might be valid. Assuming that actually happens.

Apologize to your family, not seeking their forgiveness, but rather to ensure that they are at peace. You deserve no peace of mind for what you've done to them. Your rights to peaceful living should be permanently revoked. Give it to your family instead, the ones who actually deserve it, and from whom you've taken so much, too much, you disgusting leech.

Shut yourself up already. Nobody cares. Go do something useful. Get off of Threads. Quit this and go help where you're needed. Why do you have to be told to do this? Even non-human animals have a natural instinct. Are you that utterly stupid? What a broken little chump. It's laughable that I ever, for a single fraction of a second, could've thought of myself as a man.

Especially acting out like I am now. Men don't do this.

Whenever I am angry, this is my look, so I've learned to walk away on most occasions or keep silent.

You are a smart man, and a true man.

That's how I learnt to live in this life..

!DOOK


You just got DOOKed!
@calebmarvel01 thinks your content is the shit.
They have 12/200 DOOK left to drop today.
Learn all about this shit in the toilet paper! 💩

You are respected and respectable, my friend. You deserve better than me around you. I hope I haven't made you worse off.

Smile 😊😊😊, please, I am not a perfect man... As I've always said, who am I to judge?

I hope I haven't made you worse off.

"Beauty is in the eyes of the beholder"

If you truly love someone, there are things you won't count against him or her...

Love covers a multitude of wrongs...

I hold nothing against any individual. I don't condemn, so that I will not be condemned too...

I don't hate anyone; I only frown on the action and the spirit behind the act.

Even so I believe I have wronged you. I will not forgive the multitude of transgressions. It was not one mistake, but an accumulation over time. There's a point at which no amount of love is enough, especially from a stranger. I should finish and close myself off from this account, if I had the least bit of decency. I don't think I do.

I think I'm okay these days, but I've had days where I think like this a lot. I won't claim I understand what @freecompliments is going through right now, but I think like that sometimes, and I'm okay now:

https://inleo.io/threads/view/freecompliments/re-leothreads-2h33yvmxf?referral=freecompliments

I would normally comment how it's ok, but I'd be a hypocrite for doing so. Worth ignoring and laughing instead. Nothing I've said has been of value or truth since it's clearly not working. I hope I haven't hurt too many people in the process. I know some have been hurt. I can't forgive that fact.

hahaha 🤣🤣🤣🤣🤣..

At least this was a smart idea to contain my insanity instead of spreading and smearing this 💩 all over Threads for everyone to suffer

Fuckin' coalition negotiations in the different local states make me rage and want to scream into the forest.

Hey, I feel related to that! !LOLZ

Chuck Norris once went on a bicycle ride
and accidentally won the Tour de France.

Credit: blumela
@logen9f, I sent you an $LOLZ on behalf of ahmadmanga

(1/2)
NEW: Join LOLZ's Daily Earn and Burn Contest and win $LOLZ

I can't wait to view this from Snaps, where I'll actually be able to see the whole thread, and wouldn't have had to wait 30 seconds for what I typed to show on the screen.

He is talking about when the InLeo referral system would end.

https://inleo.io/threads/view/milaan/re-falcon97-fyupcsej

What if I tell my referrals I would share the LEO token rewards with them if they become active? Is that ethical?

https://inleo.io/threads/view/ahmadmanga/re-falcon97-yrngmrwe

Sure, why not. We can reward each other however we want. Sounds like a good way to help new users (that you refer). !BBH !DIY !DOOK


You just got DOOKed!
@pepetoken thinks your content is the shit.
They have 2/300 DOOK left to drop today.
Learn all about this shit in the toilet paper! 💩

@falcon97! @pepetoken likes your content! so I just sent 1 BBH to your account on behalf of @pepetoken. (2/100)


@pepetoken just sent you a DIY token as a little appreciation for your comment dear @falcon97! Feel free to multiply it by sending someone else !DIY in a comment :) You can do that x times a day depending on your balance so:

Don't be shy - share some DIY!

You can query your personal balance by !DIYSTATS

The difference is in blogging style, reward models. It adds up to the needed motivation and self growth

Two different worlds with two different systems.

🧵/1

A new game feature rewards players when they upgrade leagues, encouraging higher-level play. Recently, I advanced to Gold One, receiving 15 reward cards as part of this update.

#threadstorm #outreach #splinterlands #game

🧵/2

Using potions to boost chances, I hoped for legendary or gold cards but received regular rewards. This addition benefits gameplay progress, motivating players to aim higher.

more to follow in post-

1/3🧵 It's not hard to stop and reflect on our purpose in life and the legacy we’re leaving for future generations. Looking at this daily routine that we all share, it seems like a sad, hectic, and uninspiring life. Do we really live just for this?

#threadstorm #outreach

2/3🧵 There’s a cycle that traps us in the pursuit of a better life, and there's nothing wrong with that, but it shouldn’t be our only purpose. Of course, each person has their own, and that’s a personal mindset. But I believe we can be better, leave a legacy of good examples to be followed, and transform, even if just a little, the world we live in.

3/3🧵 This is a bit of how I think, and I invite you to share your thoughts with me in the post below:

https://inleo.io/@michupa/what-really-matters-enpt-kbr

There is a sector of the population that is even struggling to fulfill these basic priorities of life.

#freecompliments

$LEO rises with this price. #hive #leo #cent

From this level we can only expect this to rise. Blessed are those who are buying at this level.

#freecompliments

Get ready to captivate this Halloween! Are you in? 🔥

https://trynectar.ai/pixie

Off to work I go; get the snow off the car first. Brrrrrrr

Show us a picture; or stop lying !LOLZ

What do you get if you cross a bullet and a tree with no leaves?
A cartridge in a bare tree.

Credit: reddit
@bradleyarrow, I sent you an $LOLZ on behalf of ben.haase

(3/10)
Farm LOLZ tokens when you Delegate Hive or Hive Tokens.
Click to delegate: 10 - 20 - 50 - 100 HP

Here is the daily technology #threadcast for 10/28/24. The goal is to make this a technology "reddit".

Drop all questions, comments, and articles relating to #technology and the future. The goal is to make it a technology center.

Google

Gmail will now help you write an email on the web with AI

Google is expanding “Help me write” to Gmail on the web, allowing users to whip up or tweak emails using Gemini AI. Just like on mobile, users will see a prompt to use the feature when opening a blank draft in Gmail.

#newsonleo #technology #google #ai

Hi, @coyotelation,

This post has been voted on by @darkcloaks because you are an active member of the Darkcloaks gaming community.


Get started with Darkcloaks today, and follow us on Inleo for the latest updates.

Humans Are Evolving Right Before Our Eyes on The Tibetan Plateau

Humans living in the high altitudes of the Tibetan Plateau, where oxygen levels in the air are notably lower than where humans usually live, have changed in ways that allow them to make the most of their atmosphere. Their adaptations maximize oxygen delivery to cells and tissues without thickening the blood. These traits developed due to ongoing natural selection. Learning about populations like these helps scientists better grasp the processes of human evolution.

#science #evolution

The same thing goes on around Lake Titicaca; the people there have no problems breathing and being very active!

Meta

7 tasks that Meta AI can help with on a daily basis

In addition to generating images, Meta AI helps in various everyday situations, such as creating to-do lists, carrying out research and much more. The features are available in the generative AI-powered virtual assistant of WhatsApp, Instagram and Facebook.

#newsonleo #technology #ai #meta

1. Create to-do lists and routine

To request a list of tasks for the day, the user can ask Meta AI through a prompt such as “Create a list of tasks to accomplish in 30 minutes”, or “From activities x, y, z, suggest a list of tasks to be completed in 2 days”.

2. Assistance with device configuration

Meta's AI can also assist users who are having difficulties with other devices through prompts such as: "How do I configure [device] for [function/task]?", for example.

3. Provide definitions and explanations

Both for simple everyday questions and for questions about school, work or college, it is possible to request commands such as: "Explain the meaning of a word/phrase", or "What is the relationship between concept 1 and concept 2?".

4. Suggest replies to messages

It is possible to ask Meta AI for suggested replies. The user can insert the prompt "Suggest an answer for this [question/situation]" or, within a WhatsApp conversation, activate the AI with the command "@MetaAI" in the chat and request the text they need.

5. Assist in creating study plans and questionnaires

For those who need help organizing and dedicating themselves to their studies, it is possible to request study plans from Meta AI such as: "Create a study plan for [deadline/duration]", or "Create a questionnaire with 5 questions about [subject]".

6. Generate project ideas

The user can request ideas for personal or professional projects. You can prompt with “Help me create a project that meets [need]" and "Develop ideas for a project with limited resources."

7. Indicate recipes and menus

To help organize your routine and provide meal ideas, users can use prompts for Meta AI such as "Create a recipe with [fresh/seasonal ingredient]" or even "Help me develop a recipe without [prohibited ingredients]".

Question to the tech audience

What is needed for EU to catch up in the AI race?

It can't. The EU regulated itself out of contention. Trying to play catch-up in this era is a fool's game. Things are moving too fast.

The largest tech companies in the world are either in the US or China. What is the largest one in the EU? I don't know. Is it SAP?

We saw the EU screw itself on this end. Speed is the key, and that isn't the EU's forte.

Is there no chance for the regulators to catch up? I mean otherwise we have resources in both manpower and capital.

No. The system is designed and built upon regulation. Everything is stifled there. There is no culture for success ala a Silicon Valley where innovation reigns.

It is the woe of a communist-style system. When the government is heavily involved, which the EU is (as are national governments), it is curtains for innovation. It is really that simple.

China has the CCP, but they are innovating (along with stealing tech). Of course, that could change too, since they are throwing the likes of Jack Ma in jail.

However, you only need to look at the Chinese automakers compared to the European (mostly German) to see how the latter is getting its ass kicked.

It will be interesting to see if Europe's only strategy will be to ally with the US.

For me, I hope we can start catching up. We need to start innovating again.

Well, the US is dividing. European thinking is penetrating certain aspects of the US mindset. However, this is not the case with people like Elon and those companies that left California.

I think we are going to see more innovation in the US shifting around.

Is a country like India or Japan also out of the game?

Most likely. Japan is falling behind. India has the same issue... it can't get out of its own way.

The system is not set up to incentivize innovation. Instead it stifles it.

Yeah, and sometimes it's just easier to push something through when it's a dictator running the place.

How is Russia doing on the AI front?

Not even in the game as far as I can tell. Russia is a military force, not so much a technological one.

So they aren't really a player on the global technology stage.

Why OpenAI’s $157B valuation misreads AI’s future

OpenAI's $6.6 billion fundraise earlier this month was a statement about where AI will create value and who stands to capture it. Its investors are betting that AI is so transformative that the usual rules of focus and specialization don't apply - the first company to achieve AGI will win everything. However, there are many barriers to overcome before its goal can be achieved. This article looks at the challenges that OpenAI faces and where the most promising opportunities in AI for investors and startups lie. New technologies, no matter how revolutionary, don't automatically translate into sustainable businesses.

#technology #ai #openai #artificialintelligence

There is some validity to this article in my opinion, while also having a basis in nonsense.

For LLMs, the value is not in that layer but what is built on top. The LLM is basically a commodity since they all are essentially training on the same data.

This is where the social media entities have an advantage. They can bring features out to users via existing operations. OpenAI doesn't have that. That means they are lacking the consistent (free) data flow along with the ability to easily direct services.

For example, Grok just rolled out image generation. This means X premium users can upload an image and get an explanation. What does OpenAI do with this feature?

That said, the major cost is in building infrastructure. That is where we are at.

Should JavaScript be split into two languages? New Google-driven proposal divides opinion

A proposal to split JavaScript into two languages has been presented to Ecma TC39, the official standardization committee. The proposal argues that the foundational technology of JavaScript should be simple because security flaws and the complexity cost of the runtimes affect billions of users. New language features only benefit developers and applications that actually use those features to their advantage - adding them almost always worsens security, performance, and stability. The proposal suggests changing JavaScript's approach to one where most new features are implemented in tooling rather than in the JavaScript engines.

#technology #google #javascript #programming
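As a rough sketch of the "features in tooling, not in engines" idea (my illustration, not taken from the proposal itself): build-time tools such as the TypeScript compiler or Babel already lower newer syntax into a smaller core language before the engine ever runs it. The file name and the emitted snippet below are hypothetical, approximating what a `tsc --target ES5` build produces.

```ts
// greet.ts (hypothetical example), written with newer convenience syntax:
// optional chaining (?.) and nullish coalescing (??).
type User = { profile?: { displayName?: string } };

export function greet(user: User): string {
  return `Hello, ${user.profile?.displayName ?? "anonymous"}!`;
}

// After a build step targeting an older core (e.g. ES5), the engine only sees
// simpler constructs; the "feature" lived entirely in the tooling. Roughly:
//
// function greet(user) {
//   var _a, _b;
//   return "Hello, " + ((_b = (_a = user.profile) === null || _a === void 0
//     ? void 0 : _a.displayName) !== null && _b !== void 0 ? _b : "anonymous") + "!";
// }
```

Under that model, engines could stay frozen on a small, well-audited core while syntax conveniences keep evolving in compilers and linters.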

EU auto biz slump throws IT’s growth plans off the track

The European automotive industry's slowdown has affected the growth of the auto engineering business for top software service providers in the first half of FY25.

Top IT giants Tata Consultancy Services (TCS), Infosys, and HCLTech reported softness in the automotive sector, particularly in Europe, during Q2. This was attributed to ongoing supply chain challenges and regulatory shifts.

#technology

The European regulatory push toward electric vehicles (EVs), which have lower margins, coupled with intense price competition from China, has dampened new car demand, prompting higher technology investments. The impact is visible in the financial results of major automakers like Volkswagen, Stellantis, Mercedes-Benz, and Porsche, which reported lower-than-expected profits.

Salil Parekh, CEO and MD of Infosys, noted a slowdown in Europe’s automotive sector, stating, "We have seen slowness in the automotive sector in Europe...The European automotive sector faces recent challenges, while discretionary spending remains under pressure. We see opportunities in supply chain optimization, cloud ERP, smart factories, and connected devices across various sub-verticals."

Similarly, HCLTech CEO C Vijayakumar remarked, “There is pressure in automotive, especially in Europe, reflected in our numbers this quarter and likely in the next as well.” Vijayakumar added that cost-cutting measures among some major clients have led to project cancellations.

Despite the automotive slowdown, IT leaders remain optimistic about manufacturing growth outside the automotive space. Wipro CEO and MD Srinivas Pallia highlighted opportunities in software-defined vehicles (SDVs) and cloud-based car solutions on the engineering front.

According to Pareekh Jain, founder and CEO of IT research firm EIIR Trend, the automotive tech business for Indian IT services firms comprises roughly 50-60% of manufacturing revenues—a significant vertical, contributing about 15% to India’s $250 billion outsourcing industry. This places India's automotive tech and engineering sector at approximately $20 billion. “The automobile industry has seen tailwinds over the past three years, but the momentum is shifting. Incumbent OEMs are facing challen ..

For the top six India-based companies—TCS, Cognizant, Infosys, HCLTech, Wipro, and Tech Mahindra—FY24 revenues totaled around $97 billion, with manufacturing revenue comprising approximately $13 billion.

Mid-sized, automotive-focused engineering firm KPIT Technologies experienced a more pronounced impact, with pressures expected to persist over the next two quarters. Europe and the UK, which represent over 40% of KPIT's revenues, showed a decline for the first time since the pandemic.

KPIT CFO Priyamvada Hardikar noted the challenges facing the mobility industry, particularly in the automotive sub-vertical, as it contends with regulatory changes, rising vehicle costs, and shifting consumer preferences. “In Europe, OEMs are facing financial turbulence...The financial situation of some U.S. clients also adds to this uncertainty,” Hardikar commented, adding that the company is working closely with clients to prioritize and adjust delivery strategies, which may lead to delaye ..

The lag effect became evident in technology service companies' July-September quarter results, as auto manufacturing growth declined. This overhang is anticipated to persist into the third quarter ending December. While the manufacturing segment remained stable for the top three players, Tech Mahindra, the fifth-largest, reported a 4% decline in its manufacturing vertical due to automotive sector softness. Analysts observe that headwinds led to a 0.3% quarter-on-quarter (QoQ) decline in auto ..

Tesla

Tesla Cybercab appears "in daylight" for the first time and impresses

The Tesla Cybercab, the autonomous taxi without a steering wheel or pedals that Elon Musk presented to the world at an event in early October, appeared "in daylight" for the first time last Saturday (the 26th).

#newsonleo #technology #tesla

The robotaxi was the main attraction at "Frunk or Treat", an event held at Tesla's Gigafactory in Texas (USA), whose main purpose was to show the public a little of what Elon Musk's automaker is preparing to bring to market.

This was the futuristic model's debut in a daytime setting, since the party where it first appeared officially was held at night and in a studio, which made it hard to get a complete view of Tesla's autonomous taxi.

Now, although it was placed under a tent to shield it from the sun, the Cybercab finally showed its lines without any makeup or secrecy. And what stood out is that, while less striking than the Cybertruck, for example, the autonomous robotaxi also impresses with its design.

When will the Cybercab be launched?

Elon Musk did not want to pin down exactly when the Tesla Cybercab will launch, but he projected that the autonomous taxi could officially reach the market around 2026, priced below US$30,000 (about R$170,000 at current exchange rates).

According to the billionaire businessman, the rollout of the Cybercab will significantly reduce public transportation costs, and because of that, investors should have no doubts about the success of the brand's new venture.

Apple

Apple Intelligence will come to EU iPhones in April

Apple Intelligence has finally launched in US English, and if you’re in the EU, you’ll be able to use the new AI features on your iPhone and iPad starting in April, according to an Irish Apple newsroom post.

#newsonleo #technology #apple

When the features roll out to iPhones and iPads in the EU, they'll "include many of the core features of Apple Intelligence, including Writing Tools, Genmoji, a redesigned Siri with richer language understanding, ChatGPT integration, and more," Apple says in the post.

However, if EU users want to get a taste of Apple Intelligence sooner, they can try the initial features on their Mac that are now available with macOS Sequoia 15.1. That first batch of features includes AI-powered writing tools, improvements to Siri, and email summaries in Mail.

Apple also announced that Apple Intelligence will launch in localized English in Australia, Canada, Ireland, New Zealand, South Africa, and the UK in December. Presumably, they’ll be included with iOS 18.2, which is set to add a bunch more Apple Intelligence features like Siri’s ChatGPT upgrade.

I think that unfortunately this type of situation could become common as more people have access to AI.

#technology #ai

https://inleo.io/threads/view/coyotelation/re-leothreads-36khwmyz6?referral=coyotelation

I haven't read into the story, but we live in a world where people do not like to take accountability for anything. It is awful that people are committing suicide, especially at such a young age.

However, to state that it is AI's fault seems like a stretch.

I wonder what kind of nonsense this kid was filled with as he grew up.

Alright, Task. Quickly summarized: the boy managed to create a kind of "Daenerys Targaryen", and as their interaction progressed, he ended up falling in love with this character created by AI.

His mother claims that the AI company and Google are to blame. In my opinion, they are not to blame for anything. The site is prohibited for anyone under 18 years of age.

But I mentioned earlier that as more sensitive people have access to AI, this kind of thing can happen. Just see how people are currently dealing with some problems in their lives.

Depression is something that many are not taking seriously and this mother was not as present in this boy's life.

It is not the AI's fault, in the same way that a gun is not to blame for a murderer shooting at someone.

Seems like a common story. Sad but far too common.

True, unfortunately yes.

From Reddit:

I often think about this. Soon the majority of jobs will likely be redundant (including mine). What are your plans/tips for how to prepare, and what to do when it happens?

Hopefully there are many years before this happens, but want to start preparing now, just in case it happens sooner.

A question many should start to think about at least to be a little prepared.

I would love to work somehow with a project on Hive.

This is the future. People need to get in the mindset of asset accumulation.

Then work on building the value of it.

Do you think Hive is actually a stepping stone into the mindset of accumulating assets?
Hive kinda centers around building and maintaining your account.

Do you think Hive is actually a stepping stone into the mindset to accumulate assets?

No. It has nothing to do with Hive. It is dependent upon the mindset that people choose to have. We see few on Hive who have this mindset.

Hive is a tool and for those who have the mindset, it is a way to start accumulating assets.

But few take ownership. They are not in Web 3.0.

Right, I get that it's just a tool, but there are not very many places IRL where many people can get these opportunities.

Sure, they can start their own businesses, but that might come with big financial risks. Here you only lose time, which can be important too, of course.

Starting to become more self-reliant isn't such a bad idea either: growing crops and learning the basic skills to survive without depending on the government.

There is a major portion of people with that mindset. We will see how it unfolds.

What do you see yourself doing on Hive, do you have something in mind?

Yeah, I have two projects I would love to build someday. Don't wanna share the ideas yet though.

Looking forward to hearing more!

Good luck building them, hope we will know when you launch them.

The earlier the preparation, the better. I have been looking into the business world to spot sectors that AI will work with rather than rival.
#freecompliments #gmfrens

I purchased some land and am building a homestead. Between solar, animals, fruit trees and a huge garden I plan on being as self sufficient as possible. Because the rich sure as shit aren't going to help those that lose their jobs to technology.

People are so generationally brainwashed by the way society/civilization is presently structured that they can't even imagine what life could be without being a labor force... it's kinda sad to me that people are more afraid of NOT HAVING A JOB and less excited about the possibilities of FINALLY HAVING A FULL, UNINHIBITED LIFE. When society restructures, money, politics, status, and materialism will also all have to change... we'll hit a new era, similar to the ways we have in the past for thousands of years. We will all find the new normal and have a hand in creating it.

The global turmoil and unrest at present is the beginning of these changes. It will all be gradual, but I believe that one day everyone will just have what they need and live how they wish within the rules of the new society. Then humanity will finally be free to figure out its true purpose, to ponder higher thoughts, to begin to evolve our intellect and spiritual beings in ways we have never been able to before, free from the shackles of the rat race, the stress of work, the hardship of bills, etc. Personally, I'd rather focus on that potential rather than on "what am I ever going to do".

My personal plans? I'm hoping to be retired by then and live a comfortable if frugal lifestyle off my pension and savings.

My job is absolutely one that can likely be easily automated in the next five or ten years, so I'm fortunate in my timing to (hopefully) get out while the getting is still good.

Start growing your own food, and collecting rainwater, and power… hopefully human needs will be made human rights without a fight, but I wouldn’t put all your eggs in one basket.

Fattening myself up so they choose to keep me as a battery

I think we might have to worry about pensions and savings (and, really, a lot of things that we take for "normal" right now) if the transition to automation is disruptive to the economy. (And wouldn't it have to be?)

It already happened to me, so I got a new job. Retraining from scratch. And I guess that's the game until my back gives out and they toss me down a hole.

Unfortunately, this is out of reach for most people due to costs and limited income.
The prices of land, a house, solar, etc. are rapidly growing.
Add substantial land for farming, grazing, a barn, and storage. Some farming equipment + storage. Land plots of that size alone are at 300-500k already.

Even if I chose a countryside location, I'd be looking at 1+ million € for a functional self-sufficient setup. I won't save that much before I'm way too old, and I'm in a highly paid job.

I wouldn't prepare for that; get used to the idea of having to keep going to work. Jobs will change, but technology won't make the majority unemployed.

Technology replaces jobs all the time. It just makes people more efficient. One person can do the job it used to take 2, 3, or 10. But more jobs became available. Like you said, there are 8.2 billion people now. And now there are billions of jobs. But there were only 4.8 billion people 50 years ago. So half as many jobs (roughly). And look at all the technology that came into place in those 50 years, making people more efficient, yet still there are twice as many jobs now (roughly)

The trick is to be useful and always be willing to learn the new stuff. Don't get stuck in a rut saying things like "we never needed computers back when I was working in ..." Those people get replaced because they refuse to be fluid.

Haven't you heard? It's going to be utopia where basic income will let us do what we want. /s

Well, there is a point to UBI.

Right now, we have the system set up where the majority of people work to live.

That means your average person's ability to consume goods, spend money, and be an overall contributor to the economy depends on them being employed the majority of the time.

If you break that connection, you could have major issues. The numbers can't go up if people can't buy things. So, if entire sectors automate rapidly, that could get... interesting.

It's not a problem with technology, but rather that we created a lot of "filler" jobs for the rapidly increasing population.

It was a job inflation. We printed more jobs just to have more jobs. Not that these jobs were actually needed and important.

And since they are so basic, technology can now completely take over all of them at low cost. Technology is bursting that bubble.

I worked for 18 years on the copper telephone network, people who really know the work are increasingly rare. I'm taking advantage of my layoff to try to move into electricity and fiber

Save money to buy ai robots and start your full automated bakery or something like that.

Hard to say. Sometime in the past, the ones who had the job of "knocker upper" were replaced by alarm clocks. My guess is that they got dispersed back into the job market the same way we all will be now. The biggest difference: there are no more jobs, or maybe there are new jobs that we do not know about yet.

in every automated work process, there is a human, making it happen. evolve with the technology, to control the technology.

i know that the vision of a utopian society has humanity running around as carefree beings, following our dreams and desires BUT...

capitalism will never allow it.

that being said, the world will be a used-up ball of dust before robots replace humans. take self-driving cars, for example. there will never be a safe self-drive system until ALL vehicles are on the same closed network. nostalgia, competition, and the loss in capital funds will never see it happen in our lifetime.

Learn how to maintain the robots that take your job

Unfortunately, it is going to get very rough for a lot of people because the Gov is going to drag their feet for a while before coming up with a mid-tier or lesser plan and implementing it. So I would suggest doing the following now, while you have income coming in: pay off debts, have diverse investments in physical and paper assets, save money, and figure out something you can do to make income if you lose your job.

Tech replacing jobs is nothing new - it has been happening throughout history. Guys making stone axes were put out of work by the bronze axe makers. Saddlers and farriers became a niche industry when the motor car became popular.

I work in graphics and I trained before PCs became widely used. Everything I learned is completely irrelevant now. The PCs I use and the software changes completely every 5 years. I keep learning. Now we're supposed to be scared of AI taking our jobs. Guess what - I'm learning how to use AI. It's just a new tool like so many new technologies. I've never been out of a job in 45 years.

It happens periodically. I had a job as a security guard years ago (circa 2008) and one of the older guys I worked with lost his job years before due to digital photography. There was no demand for people who can develop film anymore so he retrained to be armed security. Funny enough, developing film has become a niche market now so maybe he’s back to doing it on the side again.

Point is, specific jobs disappear but human labor is a very long way from becoming obsolete.

Governments and experts will have to radically transform how our economies currently work. It's already straining under this transition. Automation stopped being a boon for the economy 50 years ago. We will soon be at the breaking point... much like climate change however folks won't seriously consider it (including the citizens) until we are in crisis. No personal planning can avoid economic crisis. Being a hermit/prepper I guess, but majority of folks don't have that option even if they wanted it.

The argument against this is the lump of labor fallacy

https://en.m.wikipedia.org/wiki/Lump_of_labour_fallacy

The cost of a good is a function of its scarcity. Once AI makes most things cheap, we will find new shiny baubles to chase after, and the fact that they are expensive will be what makes us want them more (their high cost allows us to parade our social stature in front of others as we try to climb the social hierarchy). Since humans remain the rate limiting factor in much production, things that are authentically made by humans, even if they are not necessary, will be what we value and spend money on to maintain our sense of prestige. It’s frankly absurd, but it keeps us on the hamster wheel. Just look around you-how much of the junk you have do you REALLY need? We’ll just find new things we think we need as the the things we have get abundant and cheap due to AI

You are severely overestimating technology and underestimating how useful you are in your job. “Soon,” no. “Majority of jobs,” no.

You’re wasting your time with doomsday prep. But if you want to prepare, invest for retirement and pursue higher education. The higher skilled you are, the more irreplaceable you are.

Most likely, soon… blah blah. I have no doubt you will get left behind. Go read a book

Learn to code. Provided that AI doesn’t become our overlords (in which case we will have worse problems) then knowing how to write code to leverage AI will set you apart. The world is changing…similar to what happened with the dot com boom the people most capable of adapting will be the ones who secure their future. Unfortunately, not everyone can do this.

I think this is getting blown out of proportion. Keep in mind, assuming all/most jobs were replaced, there would be no one to buy said products. If no one is buying, then no one is making either.

I feel like the small business won’t be able to afford that stuff until it becomes much widely available.

They won't replace all jobs. Robots don't pay taxes. Retrain and do something else. I'm planning on being a robot cuddler 4 pay.

It won't replace our jobs; it will make our jobs more productive.

Lots of jobs will be created to support these new technologies.

You need people to create and assemble robots, assembly lines and other stuff like that.

You need people to create good AI.

I've said this before to people who are concerned about AI and automation replacing their jobs.

Unfortunately, the past, where people could put in an honest day's work and go home at the end of the day, is quickly disappearing. It’s very dystopian, but we’re not going to go back to that. Unless, of course, there’s some kind of catastrophe that completely destroys modern living.

Unfortunately, you have to look at it like a machine would: evolve and skill up or become obsolete. Inflation is rising at such a rate that you will quickly be priced out and end up in poverty if you do not do anything about it.

I’m almost 40 and I have made my living entirely in customer service. Things have changed drastically in the 20 years I have been working in this field. There is now so much AI, automation, and scripting being used for basic customer service and technical support that the pay rate for my level of expertise has been slashed in half.

I have gone back to school to diversify my skill set and remain relevant. Not exactly what I wanted to do, but it is what it is. I tell the same thing to a friend of mine who makes his living as a loader. All he does is pick up a load from one place and move it to another. He is hoping to retire from the concrete factory where he works, but he’s got almost 30 more years. I told him he should not be so certain that his job will continue to be there, but he has pretty much blown me off. It would be all too easy for them to build a machine that does his job.

I too wanted a simple job that I could go to, put in my 8 to 12 hours, then go home without giving it any further thought and still be able to afford living, but that is not the case. So until the economy and the government evolve to support the general public with something like a more universal welfare program, you are going to be out of luck if you don’t try to stay above the wave now.

In the past, technology hasn't simply removed jobs, but created more. The sewing machine put small seamstresses out of work but created factory jobs. The mechanization of farm equipment removed the need for so many farmers, so people flocked to cities to work in new industries. Generally, technology takes the jobs we hate to do and gives us the freedom to do the jobs we like, like problem solving, services, travel, art, and entertainment. Sure, AI may be a good resource in the classroom, but are parents really going to be OK with there being no human teachers to foster social connection and development? Who will make sure the machines are running properly and fix any bugs or errors? I think humans will always be needed in the age of technology, and if not, we will choose to do work that helps our communities flourish.

Technology has been replacing jobs for hundreds of years and yet somehow everyone still has jobs.

Sure you need to be aware of advances that may impact you / your particular job but people can re-train or switch in that scenario if they've got a bit of a financial buffer or have been proactive about learning other skills etc.

Become as adaptable as possible. This is how evolution works and what makes mankind so great

honestly if mechanized labor replaced human labor, you'd think that humans could all live like we were meant to, without having to debase ourselves for money to survive. somehow though in this timeline it seems likely that the billionaires would try to keep everyone in poverty even harder than they already do... at a certain point, it's like, they still live on earth, you know? like they have an address

A job is just an area of responsibility and a collection of associated tasks. AI will start to make several tasks easier, and let one person have responsibility over more of those tasks, but certainly won't be eliminated entirely.

Use AI to get better at your job, and you'll probably be safe. If not, then use AI to retrain and upskill for a better job.

This is an age of massive opportunity, but only for those who act accordingly.

Look for where the new ones come from. They asked the same question as the industrial revolution started to throw people off the farms and into the factories. And again, the world was ending with the steam age, the motor car, computers, the internet and now AI. If you want a job, you will get one, just keep learning

Retrain or start a company controlling the AI bots to do your old job, you focus on the human elements of it like getting suppliers or signing up customers. It really depends where you came from and where you want to go in your work life.

We become more like the Greys' portrayal. Since we will not be burdened by physical work or wars, our brains will start growing in size as we work the mind and spirit. Then some of us will discover time travel and come back to see our warring ways, accidentally triggering God stories and Area 51.

Maybe the great flood was the result of time travel triggering a new time line... who knows hahaha.

Tech is moving way faster than any of us expected. One way to prep is to focus on skills that machines aren’t great at yet: creative stuff, human-centered work like counseling or social work, and stuff that needs real-world adaptability. Also think about learning how to work with tech, like getting into AI tools or automation, so you're the one managing the robots instead of getting replaced by ‘em.

New opportunities are coming when the older jobs get replaced

Don’t worry about AI, robots, or drones; it’s not like we aren’t going to face an extinction-level threat anyway. You know, climate change would be the kinder death.

Becoming closer to nature. Self-sufficiency is probably becoming more important, and culture as well (personally speaking). I don't completely like it, but when all or most jobs finally become redundant we may either fall into conflict or harmonize with each other, and how that phase goes will probably decide how humanity progresses.

Founders and VCs back a pan-European C corp, but an ‘EU Inc’ has a rocky road ahead

It’s become a common refrain in political discourse: Europe needs to take radical action to remain competitive. On the long list of potential reforms, one that’s gaining particular traction is a new, EU-wide corporate status for innovative companies.

Known (somewhat obscurely) as the “28th regime,” the innovation is being billed as Europe’s answer to a Delaware C-Corp, and would add to what already exists in the EU’s 27 member states. It is now backed by a grassroots movement of entrepreneurs and VCs that also brought along the much more palatable name of “EU Inc” — and some unexpected momentum. Launched on October 14, the EU Inc petition has already attracted some 11,000 signatures.

Meta releases an 'open' version of Google's podcast generator

Meta has released an “open” implementation of the viral generate-a-podcast feature in Google’s NotebookLM.

#meta #ai #notebook #llama #podcast

Called NotebookLlama, the project uses Meta’s own Llama models for much of the processing, unsurprisingly. Like NotebookLM, it can generate back-and-forth, podcast-style digests of text files uploaded to it.

NotebookLlama first creates a transcript from a file — e.g. a PDF of a news article or blog post. Then, it adds “more dramatization” and interruptions before feeding the transcript to open text-to-speech models.
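
Schematically, that pipeline is easy to picture in code. Below is a rough sketch in Python; every function in it is a hypothetical placeholder standing in for a PDF parser, two Llama prompting passes, and an open text-to-speech model, not Meta's actual implementation.

# Schematic NotebookLlama-style pipeline; every helper is a hypothetical placeholder.

def extract_text(pdf_path: str) -> str:
    # Placeholder: a real version would parse the PDF or blog post into plain text.
    return "Source article text..."

def write_transcript(text: str) -> str:
    # Placeholder: a real version would prompt a Llama model to draft a two-host script.
    return f"HOST A: Today we are discussing: {text}\nHOST B: Great, let's dive in."

def add_dramatization(transcript: str) -> str:
    # Placeholder: a second pass that injects interruptions and "more dramatization".
    return transcript.replace("Great,", "Wait, wait... great,")

def render_audio(transcript: str) -> bytes:
    # Placeholder: a real version would call an open text-to-speech model per speaker.
    return transcript.encode("utf-8")

text = extract_text("article.pdf")
transcript = add_dramatization(write_transcript(text))
audio = render_audio(transcript)
print(f"Generated {len(audio)} bytes of placeholder audio")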

The results don’t sound nearly as good as NotebookLM. In the NotebookLlama samples I’ve listened to, the voices have a very obviously robotic quality to them, and tend to talk over each other at odd points.

But the Meta researchers behind the project say that the quality could be improved with stronger models.

“The text-to-speech model is the limitation of how natural this will sound,” they wrote on NotebookLlama’s GitHub page. “[Also,] another approach of writing the podcast would be having two agents debate the topic of interest and write the podcast outline. Right now we use a single model to write the podcast outline.”

NotebookLlama isn’t the first attempt to replicate NotebookLM’s podcast feature. Some projects have had more success than others. But none — not even NotebookLM itself — have managed to solve the hallucination problem that dogs all AI. That is to say, AI-generated podcasts are bound to contain some made-up stuff.

BP Walks Back Green Targets Amid Market Realities

  • BP has reversed its commitment to cut oil and gas production by 40% by 2030.

  • The energy transition remains challenged by economic realities, prompting BP and other major oil companies to scale down transition plans.

  • BP's pivot, along with similar moves from other oil majors, highlights the industry’s continued reliance on hydrocarbons.

#bp #green #energy #technology #newsonleo

In February 2020, then-brand-new chief executive Bernard Looney told the world that one of the oldest and biggest oil companies in the world was going to become a net-zero company by 2050. To achieve this, it would slash its oil and gas production by 40% by 2030.

Four years and one major crisis later, BP is abandoning not only the original production cut target of 40%, but also a revised, lower target of 25%. BP, in other words, is returning to its roots. Commodity investors who are not paying attention should be, and so should transition investors.

“This will certainly be a challenge, but also a tremendous opportunity. It is clear to me, and to our stakeholders, that for BP to play our part and serve our purpose, we have to change. And we want to change – this is the right thing for the world and for BP,” Bernard Looney said back in 2020 when he announced the company’s new course.

There was much enthusiasm in the climate activist world when that statement was made. Activists were not satisfied but did concede that it was a step in the right direction. Investors took the news differently—BP’s shares dropped precipitously immediately following the announcement of the newly charted course before rebounding later in the year.

Then came the pandemic, decimating demand for energy and leading to a price slump that BP at the time seemed to believe the industry wasn’t going to recover from, because, it said in one of its latest world energy outlook editions, global oil demand had peaked back in 2019 and it was never going to go back to those levels. BP still believed it was on the right track with its net-zero plans and a 40% cut in oil and gas production by 2030. And then it was 2022.

Oil demand had been on the rebound ever since the lockdowns began to be phased out. When China joined the party and ended its lockdowns, the demand rebound really took off. The war in Ukraine took that momentum and added supply security fears to it, producing a price rally that had not been seen in years.

The rally resulted in energy companies becoming the best performers in the stock market, overtaking Big Tech, and in record profits, which in turn led to fatter dividend payouts and massive stock repurchases. It also led to a reconsideration of some of Big Oil’s transition plans. In BP’s case, the latest stark reminder that the world still runs on hydrocarbons prompted the company’s senior leadership to abandon plans to cut its oil and gas production by even 25% by 2030.

All these developments also made investors think again—about energy transitions and the security of energy supply. It made investors think so much that pro-transition outlets are sounding an alarm about oil companies being unserious about the transition and, worse, unclear about the direction of their business, which should make investors cautious.

“A decarbonizing economy threatens the fossil fuel industry’s core business model, and the sector does not seem to be offering a cohesive and consistent plan for navigating this changing world,” the Institute for Energy Economics and Financial Analysis said in a recent report. The report zeroed in on the latest BP news about the U-turn on oil and gas production cuts, suggesting that BP basically had no idea what it wanted to do with its future, and this should make investors nervous about the whole oil and gas industry.

That criticism certainly has a lot of merit in the context of a business world that is firmly on the way to a cleaner, greener energy future because the economics of such a future make sense. The actual business world in which BP and all other companies are operating, however, is different from that vision.

In it, the economics of the energy transition, as envisioned by its advocates and proponents, do not always make sense—which is why BP and other companies are abandoning their initial ambitious targets made, one might say, in the heat of the moment, following years of activist pressure that was warmly embraced by politicians in decision-making positions.

However, once these companies realized their transition efforts were not paying off, they pivoted. One might call it a lack of a “cohesive and consistent plan.” On the other hand, one might call it flexibility in the face of a reality that has proven different than hoped for. In addition to the news about BP abandoning its production cut target for 2030, the company was also reported to be considering reducing its exposure to offshore wind at a time when fellow supermajor Shell was also dialing back its transition ambitions and another fellow supermajor, TotalEnergies, just announced a $10.5-billion oil and gas development in Suriname.

The energy industry, then, appears to have a pretty clear view of the future. Hydrocarbons remain the most widely used energy source on the planet. Their alternatives do not seem to be living up to the hype. Therefore, Big Oil is shrinking its transition ambitions in favor of the business that has proven profitable for the companies and their investors. Sometimes, it really is as simple as that.

Google plans to announce its next Gemini model soon

Google is aiming to release its next major Gemini model in December. Gemini 2.0 will be widely released at the outset as opposed to being rolled out in phases. While the model isn't showing the performance gains experts had hoped for, it will likely still have some interesting new capabilities. It appears that the top AI developers will continue to race to release ever-bigger and more expensive models even as performance improvements start leveling off.

#technology #google #ai #gemini

Award-Winning Image Reveals a Hidden Culprit Behind Alzheimer's

A neuroscientist at Augusta University has captured images of the precise moment brain tumor cells from mice interact, staining cellular components to reveal disruptions in support and transport structures. The research revealed how disruptions in a protein linking two cytoskeleton components together damage the transport system, similar to what is seen in neurodegenerative diseases. Restoring normal cytoskeleton actin and myosin levels allowed the cells to transport their components normally again. The study shows how scientific imaging can help expose biological mysteries.

#technology #health #neuroscience #alzheimers

Elon Musk's xAI adds image understanding capabilities to Grok

Elon Musk-owned xAI has added image-understanding capabilities to its Grok AI model. This means that paid users on his social platform X, who have access to the AI chatbot, can upload an image and ask the AI questions about it.

#grok #image #multimodal #x #socialmedia #ai #newsonleo #technology

In a separate post, Musk said that Grok can even explain the meaning of a joke using the new image understanding feature. He added that the functionality is in the early stages — suggesting it will “rapidly improve”.

In August, Musk’s AI company released the Grok-2 model, an enhanced version of the chatbot which included image generation capabilities using the FLUX.1 model by Black Forest Labs. As with earlier releases, Grok-2 was made available for developers or premium (paying) X users.

At that time, xAI said a future release would add multimodal understanding to Grok on X and to the model it offers via developer API.

Grok may soon also understand documents, per a Musk reply to a user who criticized the model for not being able to handle certain file formats (such as PDFs). “Not for long,” Musk responded, claiming: “We are getting done in months what took everyone else years.”

The WordPress vs. WP Engine drama, explained

This story has been updated throughout with more details as the story has developed. We will continue to do so as the case and dispute are ongoing.

The world of WordPress, one of the most popular technologies for creating and hosting websites, is currently embroiled in a heated controversy. At the center of the dispute are WordPress founder and Automattic CEO Matt Mullenweg and WP Engine, a hosting service that provides solutions for websites built on WordPress.

#wordpress #lawsuit #trademark #newsonleo

The controversy has also led to an exodus of employees from Automattic. On October 3, 159 Automattic employees who did not agree with Mullenweg's direction of the company and WordPress overall took a severance package and left the company. Almost 80% of those who left worked in Automattic's Ecosystem/WordPress division. On October 8, WordPress announced that Mary Hubbard, who was TikTok U.S.'s head of governance and experience, would be starting as executive director. This post was previously held by Josepha Haden Chomphosy, who was one of the 159 people leaving Automattic.

The core issue is the fight over trademarks, with Mullenweg accusing WP Engine of misusing the "WP" brand and failing to contribute sufficiently to the open-source project. WP Engine, on the other hand, claims that its use of the WordPress trademark is covered under fair use and that Mullenweg's actions are an attempt to exert control over the entire WordPress ecosystem.

The controversy began in mid-September when Mullenweg wrote a blog post criticizing WP Engine for disabling the ability for users to see and track the revision history for every post. Mullenweg believes this feature is essential for protecting user data and accused WP Engine of turning it off by default to save money. In response, WP Engine sent a cease-and-desist letter to Mullenweg and Automattic, asking them to withdraw their comments.

The company claimed that Mullenweg had said he would take a "scorched Earth nuclear approach" against WP Engine unless it agreed to pay a significant percentage of its revenues for a license to the WordPress trademark.

Automattic responded with its own cease-and-desist letter, alleging that WP Engine had breached WordPress and WooCommerce trademark usage rules. The WordPress Foundation also updated its Trademark policy page, calling out WP Engine for confusing users and failing to contribute to the open-source project. Mullenweg then banned WP Engine from accessing the resources of WordPress.org, which led to a breakdown in the normal operation of the WordPress ecosystem. This move prevented many websites from updating plug-ins and themes, leaving them vulnerable to security attacks.

WP Engine responded by saying that Mullenweg had misused his control of WordPress to interfere with WP Engine customers' access to WordPress.org. The company claimed that this move was an attempt to exert control over the entire WordPress ecosystem and impact not just WP Engine and its customers but all WordPress plugin developers and open-source users.

The controversy has had a significant impact on the WordPress community, with many developers and providers expressing concerns over relying on commercial open-source products related to WordPress. The community is also asking for clear guidance on how they can and cannot use the "WordPress" brand. The WordPress Foundation has filed to trademark "Managed WordPress" and "Hosted WordPress," which has raised concerns among developers and providers that these trademarks could be used against them.

On October 3, WP Engine sued Automattic and Mullenweg for abuse of power in a California court. The company alleged that Automattic and Mullenweg did not keep their promises to run WordPress open-source projects without any constraints and to give developers the freedom to build, run, modify, and redistribute the software. Automattic responded by calling the lawsuit meritless and saying that it looks forward to the federal court's consideration of the case.

In conclusion, the controversy between Mullenweg and WP Engine has raised important questions about the control and governance of the WordPress ecosystem. The dispute has also highlighted the need for clear guidance on how to use the "WordPress" brand and the importance of transparency and accountability in the open-source community. As the controversy continues to unfold, it remains to be seen how it will impact the WordPress community and the future of the platform. One thing is certain, however: the battle for control and trademarks will have far-reaching consequences for the entire open-source community.

Filigran secures $35M for its cybersecurity threat management suite

Paris-based startup Filigran is fast becoming the next cybersecurity rocketship to track: The company just raised a $35 million Series B round, only a few months after it raised $16 million in a Series A round.

#filigran #newsonleo #technology #funding

Filigran’s main product is OpenCTI, an open-source threat intelligence platform that lets companies or public sector organizations import threat data from multiple sources, and enrich that data set with intel from providers such as CrowdStrike, SentinelOne or Sekoia.

The open-source version of OpenCTI has attracted contributions from 4,300 cybersecurity professionals and been downloaded millions of times. The European Commission, the FBI and the New York City Cyber Command all use OpenCTI. The company also offers an enterprise edition that can be used as a software-as-a-service product or hosted on premises, and its clients include Airbus, Marriott, Thales, Hermès, Rivian and Bouygues Telecom.

Mutually Assured Destruction

OMG: The Arm vs. Qualcomm legal fight took a nasty turn last week, with Arm reportedly canceling Qualcomm's license to use Arm IP. This news has the makings of some scary headlines, but we think the immediate effects are likely minimal. That being said, it opens up more serious questions about what Arm aims to achieve here and how far they're willing to go to do so.

#amd #qualcomm #chips #technology #newsonleo

Does Arm's latest move – canceling a Qualcomm license – imply they're willing to take the very risky step of pushing this lawsuit all the way to a jury trial? At the most basic level, this lawsuit is essentially a contract dispute: Qualcomm pays one rate, and Arm thinks Qualcomm should pay a different, higher rate. But this cancellation clearly implies that Arm could cause deeper problems for Qualcomm, should they choose to.

Arm's Cancellation of Qualcomm License: A High-Stakes Gamble with Uncertain Consequences

In a surprise move that has sent shockwaves through the tech industry, ARM Holdings, a leading provider of semiconductor intellectual property, has cancelled its license agreement with Qualcomm, a major customer and one of the largest chipmakers in the world. The sudden cancellation has left many wondering about the motivations behind this drastic decision, which has already had a significant impact on the global chip supply chain and Arm's relationships with its customers.

At first glance, the cancellation appears to be a classic pre-trial maneuver aimed at gaining negotiating leverage in the ongoing lawsuit between Arm and Qualcomm. However, the move has backfired, with Qualcomm's stock barely budging, while Arm's stock plummeted almost 7%. This unexpected reaction raises questions about the effectiveness of Arm's strategy and the potential consequences of this high-stakes gamble.

One of the primary concerns is the impact on the global chip supply chain. Qualcomm is one of Arm's largest customers, and canceling their license agreement could lead to a significant disruption in the market. If Qualcomm is unable to ship chips, customers such as Apple would be forced to halt production, causing widespread shortages and economic losses.

This scenario would not only harm Qualcomm but also Arm, as the company relies heavily on its customers' success. The cancellation could also lead to a domino effect, with other chipmakers and customers struggling to find alternative suppliers, further exacerbating the disruption.

Moreover, Arm's cancellation of the license agreement may be seen as a hollow threat, as Qualcomm is unlikely to be severely impacted by the loss of this agreement. Qualcomm has a diverse portfolio of customers and a strong balance sheet, allowing it to weather any potential disruptions.

This lack of leverage may lead Qualcomm to view Arm's threat as a bluff, rather than a genuine attempt to negotiate a settlement. As a result, Qualcomm may be less inclined to compromise, leading to a prolonged and costly legal battle.

A more pressing concern is the potential for Arm to take this lawsuit to trial. If Arm is willing to risk the consequences of a cancelled license agreement, it may be seeking a legal victory that would provide a strong precedent for future business model and pricing changes. While this outcome could be beneficial for Arm, it would also come with significant risks, including the possibility of an unfavorable verdict or a lengthy and costly legal battle.

The uncertainty surrounding the lawsuit and its potential outcomes is further complicated by the fact that neither side has a clear advantage. The industry experts who have studied the case closely are still unsure about who is in the right, and the discovery process is likely to reveal more information that could shift the balance of power.

The cancellation of the license agreement also sends a concerning message to Arm's other customers. If Arm is willing to take such drastic action in a dispute with a major customer, what does this say about its commitment to its other partners? The tech industry is built on relationships and trust, and Arm's actions may erode the confidence of its customers and partners. This could lead to a loss of business and revenue, as customers seek alternative suppliers and partners.

In conclusion, Arm's cancellation of the Qualcomm license agreement is a high-stakes gamble with uncertain consequences. While the company may be seeking to gain negotiating leverage, the move has backfired, and the potential risks to the global chip supply chain and Arm's relationships with its customers are significant. As the lawsuit continues to unfold, it is essential for both parties to consider the long-term consequences of their actions and work towards a resolution that benefits all parties involved. The tech industry is built on collaboration and trust, and Arm's actions may have far-reaching and devastating consequences if not resolved promptly and amicably.

Google is working on an AI agent that takes over your browser

Google's Project Jarvis will be shown off as soon as December, when it releases the next version of its Gemini LLM, reports The Information.

#gemini #google #browser #ai #technology #newsonleo

Interesting... not the first company I see re-working the concept of a browser

The Rise of AI Agents

The article discusses the development of AI agents, which are computer programs designed to perform tasks autonomously. These agents are becoming increasingly sophisticated, allowing them to interact with humans and perform tasks that were previously the exclusive domain of humans.

Google's Project Jarvis

Google's Project Jarvis is a specific example of an AI agent designed to automate everyday web-based tasks. Jarvis is a Chrome-based browser extension that uses AI to take screenshots, interpret information, and perform actions. Users can command Jarvis to perform a range of tasks, from booking flights to compiling data.

The article notes that Jarvis is optimized for Chrome, which means that it will only work on Chrome-based browsers. However, the potential benefits of Jarvis are significant, as it could make AI tools more accessible to a broader audience, including those without prior experience with AI development.

Anthropic's Claude LLM

Anthropic's Claude LLM is another example of an AI agent designed to automate tasks. Claude is a large language model that can take limited control of a PC, allowing users to grant it access and control over various tasks. Claude's capabilities include tasks such as filling out forms, planning outings, and building websites.

The article notes that Claude is still considered "cumbersome and error-prone," but its potential to democratize AI access cannot be overstated. Claude's ability to learn and adapt to new tasks makes it a promising example of the potential of AI agents to become more useful and accessible to humans.

The Dark Side of AI-Driven Control

However, the development of AI agents like Jarvis and Claude LLM also raises significant concerns about the risks of AI-driven control. The most pressing issue is privacy, as AI agents may be able to access sensitive information and take screenshots of user activity.

The article notes that Microsoft's Recall is an example of an AI system that takes screenshots of everything being done on a PC, which raises uncomfortable questions about the boundaries of digital surveillance. This concern is mirrored in the backlash against Google's Project Jarvis, which some see as an infringement on user privacy.

The risk of AI Making Mistakes

Another concern is the risk of AI systems making mistakes or acting in ways that harm users. AI systems are prone to errors, which can have serious consequences, particularly in high-stakes applications like finance or healthcare.

The Need for Regulation

Given the risks associated with AI-driven control, there is a growing need for regulatory frameworks to ensure accountability and protect users. This includes developing guidelines for the development and deployment of AI systems, as well as implementing safeguards to prevent errors and ensure user safety.

A Shift in Corporate Culture

The development of AI agents like Jarvis and Claude LLM is also having a significant impact on corporate culture. Google's decision to drop its famous "Don't be evil" motto from its corporate code of conduct is a telling sign of the times.

As AI agents become increasingly sophisticated, the boundaries between human and machine are blurring. The question of what it means to be "evil" in the digital age is no longer a straightforward one. Companies like Google and Anthropic are pushing the boundaries of what is possible, but they must also consider the implications of their actions for human society.

The Future of Human Agency

Ultimately, the rise of AI agents like Jarvis and Claude LLM presents a complex challenge for humanity. While the potential benefits of increased accessibility and convenience are undeniable, the risks of losing control to machines must be carefully considered.

As we navigate this uncharted territory, one thing is clear: the future of human agency is no longer a given. The consequences of our actions will be felt for generations to come, and it is up to us to ensure that the development of AI agents serves the interests of humanity as a whole.

When you think about it, we do not need Hollywood anymore.

Animal tracking made affordable with $7 Bluetooth beacons, thanks to Apple's Find My network

Conservationists' newest weapon is a simple $7 Bluetooth beacon in a 3D-printed case. Thanks to the relatively uncomplicated hardware, it weighs much less than GPS trackers.

#technology #animal #newsonleo #apple #bluetooth

The use of tiny Bluetooth beacons in wildlife tracking is a relatively new and innovative approach that has gained significant attention in recent years. Here's a more detailed overview of the technology and its potential applications:

How it Works

The Bluetooth beacons, also known as Low Energy Beacons (LEBs), are small devices that can be attached to animals or objects in the wild. They use Bluetooth Low Energy technology to broadcast a unique identifier that can be detected by nearby iOS devices. When an iPhone detects the beacon, it anonymously reports the beacon's position to researchers, creating a crowdsourced network of location data.

The process works as follows:

  1. The beacon is attached to the animal or object, and a small battery powers it for an extended period.
  2. The beacon broadcasts a unique identifier, which is detectable by nearby iOS devices.
  3. When an iPhone detects the beacon, it uses the Find My network to determine the beacon's location.
  4. The iPhone reports the beacon's location to researchers, who can use this data to track the animal's movements.
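
For a rough feel of the receiving side, here is a minimal sketch that simply lists nearby Bluetooth Low Energy advertisements using the third-party bleak library. It does not implement Apple's Find My protocol or the researchers' beacon firmware; it only shows how a receiver in range can pick up beacon broadcasts.

# Minimal BLE scan sketch using the bleak library (pip install bleak).
# This only lists nearby advertisements; it is not the Find My protocol itself.
import asyncio
from bleak import BleakScanner

async def main() -> None:
    # Scan for advertising BLE devices for about five seconds.
    devices = await BleakScanner.discover(timeout=5.0)
    for device in devices:
        # Each result exposes the advertised address and, if present, a local name.
        print(device.address, device.name)

asyncio.run(main())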

Advantages

The use of Bluetooth beacons in wildlife tracking offers several advantages over traditional GPS tracking methods. Some of the key benefits include:

  • Low cost: The beacons are relatively inexpensive, with a price tag of around $7 per device.
  • Power efficiency: The beacons require minimal power, making them suitable for deployment in remote areas where batteries may not be readily available.
  • Easy deployment: The devices can be easily attached to animals or objects, and they do not require any specialized equipment or expertise.
  • Hands-free tracking: The beacons require no hands-on tracking or recoveries, reducing the workload and costs associated with traditional tracking methods.

Limitations

While the Bluetooth beacons offer several advantages, they are not without limitations. Some of the key challenges include:

  • Positional error: The beacons can experience high positional errors, particularly in areas with heavy traffic or signal blocking.
  • Deterioration in sparsely populated areas: The trackers can become less effective in areas with limited mobile device coverage, making it essential to deploy multiple beacons in these areas.
  • Interference: The beacons can be affected by interference from other Bluetooth devices, which can reduce their accuracy.

Future Directions

Researchers are exploring ways to overcome these limitations, including:

  • Building networks of receivers: Using Arduino, Raspberry Pi, or ESP32 boards to build networks of receivers that can improve the accuracy of the beacons in sparsely populated areas.
  • Increasing the number of beacons: Deploying more beacons in the wild can improve the accuracy of the trackers, as more mobile devices will report locations to the central base.
  • Improving signal strength: Researchers are working on improving the signal strength of the beacons, which can reduce positional errors and improve overall accuracy.

Real-World Applications

The use of Bluetooth beacons in wildlife tracking has a wide range of potential applications in various fields, including:

  • Conservation: The beacons can be used to track the movements of endangered species, which can help conservationists develop more effective conservation strategies.
  • Research: The beacons can be used to track the movements of animals in various research settings, which can help scientists understand animal behavior and ecology.
  • Monitoring: The beacons can be used to monitor the movements of animals in various environments, which can help researchers understand the impact of human activity on wildlife habitats.

Hollywood Crews Are on The Brink of Losing Everything

#hollywood #technology #industry

Instagram is lowering video quality for unpopular videos

The popularity of an Instagram video can affect its actual video quality: According to Adam Mosseri (the Meta executive who leads Instagram and Threads), videos that are more popular get shown in higher quality, while less popular videos get shown in lower quality.

#instagram #video #socialmedia #technology #meta #newsonleo

In a video (via The Verge), Mosseri said Instagram tries to show “the highest-quality video that we can,” but he said, “if something isn’t watched for a long time — because the vast majority of views are in the beginning — we will move to a lower quality video.”

This isn’t totally new information; Meta wrote last year about using different encoding configurations for different videos depending on their popularity. But after someone shared Mosseri’s video on Threads, many users had questions and criticisms, with one going as far as to describe the company’s approach as “truly insane.”

The discussion prompted Mosseri to offer more detail. For one thing, he clarified that these decisions are happening on an “aggregate level, not an individual level,” so it’s not a situation where individual viewer engagement will affect the quality of the video that’s played for them.

“We bias to higher quality (more CPU intensive encoding and more expensive storage for bigger files) for creators who drive more views,” Mosseri added. “It’s not a binary [threshold], but rather a sliding scale.”
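
As a toy illustration of that sliding scale, the sketch below maps a creator's aggregate view count to an encoding profile. The thresholds and profile names are invented for illustration only; Meta has not published its actual configuration.

# Toy popularity-based encoding ladder; thresholds and names are made up.
def pick_encoding_profile(aggregate_views: int) -> str:
    # More popular content gets more CPU-intensive encoding and larger files.
    if aggregate_views >= 1_000_000:
        return "high quality (slow preset, high bitrate)"
    if aggregate_views >= 10_000:
        return "standard quality (balanced preset and bitrate)"
    return "basic quality (fast preset, low bitrate)"

for views in (500, 50_000, 5_000_000):
    print(views, "->", pick_encoding_profile(views))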

Samsung speeds up development of a new breed of memory that combines RAM and SSD properties

The core concept behind SOM is using unique chalcogenide materials that perform double duty as both the memory cell and the selector device.

#samsung #chip #technology #ram #ssd

Forward-looking: Samsung is working to accelerate the development of a promising new memory technology called Selector-Only Memory (SOM). The latest tech combines the non-volatility of flash storage with DRAM's lightning-fast read/write speeds, making it a potential game-changer. Furthermore, manufacturers can stack the chips for higher densities.

The core concept behind SOM is using unique chalcogenide materials that perform double duty as both the memory cell and the selector device. In traditional phase-change or resistive RAM, you need a separate component, like a transistor, to act as the selector to activate each cell. Conversely, the chalcogenide material in SOM switches between conductive and resistive states to store data.

Of course, not just any chalcogenide composition will do the trick. The materials must have optimal properties for memory performance and selector functionality. To find the right candidate, Samsung used advanced computer modeling to predict the potential of various material combinations. The company estimates that over 4,000 potential chalcogenide mixtures could work for SOM. Unfortunately, sorting through all those possibilities with physical experiments would be a nightmare in terms of cost and time.

US Copyright Office denies DMCA exemption, deals blow to video game preservation efforts

The US Copyright Office has dealt a significant blow to video game preservation efforts by denying a request for a Digital Millennium Copyright Act (DMCA) exemption

In context: Video game preservation efforts have experienced another setback in their ongoing dialogue with copyright stakeholders. As they work to preserve digital culture, preservationists must find a way to balance commercial interests with historical and scholarly needs.

#video #gaming #copyright #dmca #newsonleo

Haha, have a feeling the creators won't like that at all 🤣

No not likely popular with them.

But then again, with where things are heading, it doesn't matter who likes it. Big Tech, at this moment, is in charge and AI is roaring ahead.

It is up to us to keep pushing it further out.

But at the same time they still want humans to give them data I guess?

People are doing that. They upload videos each day to YouTube and are posting on Facebook and X. So the idea of people no longer providing data is not on the agenda.

Of course, people are going to interact with synthetic data more. So it is going to make it even more powerful.

Ofc, but I thought Meta would not want to lose content creators to, let's say, Google or X.

They want to keep the data feeders contained within their ecosystem.

OpenAI transcription tool faces scrutiny over fabricated text in medical transcriptions

OpenAI's transcription tool called Whisper has come under fire for a significant flaw: its tendency to generate fabricated text, known as hallucinations.

Facepalm: It is no secret that generative AI is prone to hallucinations, but as these tools make their way into critical settings like healthcare, alarm bells are ringing. Even OpenAI warns against using its transcription tool in high-risk settings. Despite these warnings, the medical sector has moved forward with adopting Whisper-based tools.

#openai #medical #Transcript #technology #whisper

How to Turn Audio to Text using OpenAI Whisper

Do you know what OpenAI Whisper is?

It’s the latest AI model from OpenAI that helps you to automatically convert speech to text.

Transforming audio into text is now simpler and more accurate, thanks to OpenAI’s Whisper.

This article will guide you through using Whisper to convert spoken words into written form, providing a straightforward approach for anyone looking to leverage AI for efficient transcription.

#openai #Whisper #ai

Introduction to OpenAI Whisper

OpenAI Whisper is an automatic speech recognition (ASR) system designed to understand spoken language and convert it into written text.

Its capabilities have opened up a wide array of use cases across various industries. Whether you’re a developer, a content creator, or just someone fascinated by AI, Whisper has something for you.

Let's go over some of its key features:

  1. Transcription services: Whisper can transcribe audio and video content in real-time or from recordings, making it useful for generating accurate meeting notes, interviews, lectures, and any spoken content that needs to be documented in text form.

  2. Subtitling and closed captioning: It can automatically generate subtitles and closed captions for videos, improving accessibility for the deaf and hard-of-hearing community, as well as for viewers who prefer to watch videos with text.

  3. Language learning and translation: Whisper's ability to transcribe in multiple languages supports language learning applications, where it can help in pronunciation practice and listening comprehension. Combined with translation models, it can also facilitate real-time cross-lingual communication.

  4. Accessibility tools: Beyond subtitling, Whisper can be integrated into assistive technologies to help individuals with speech impairments or those who rely on text-based communication. It can convert spoken commands or queries into text for further processing, enhancing the usability of devices and software for everyone.

  5. Content searchability: By transcribing audio and video content into text, Whisper makes it possible to search through vast amounts of multimedia data. This capability is crucial for media companies, educational institutions, and legal professionals who need to find specific information efficiently.

  6. Voice-controlled applications: Whisper can serve as the backbone for developing voice-controlled applications and devices. It enables users to interact with technology through natural speech. This includes everything from smart home devices to complex industrial machinery.

  7. Customer support automation: In customer service, Whisper can transcribe calls in real time. It allows for immediate analysis and response from automated systems. This can improve response times, accuracy in handling queries, and overall customer satisfaction.

  8. Podcasting and journalism: For podcasters and journalists, Whisper offers a fast way to transcribe interviews and audio content for articles, blogs, and social media posts, streamlining content creation and making it accessible to a wider audience.

OpenAI's Whisper represents a significant advancement in speech recognition technology.

With its use cases spanning across enhancing accessibility, streamlining workflows, and fostering innovative applications in technology, it's a powerful tool for building modern applications.

How to Work with Whisper

Now let’s look at a simple code example to convert an audio file into text using OpenAI’s Whisper. I would recommend using a Google Colab notebook.

Before we dive into the code, you need two things:

  • An OpenAI API key
  • A sample audio file

First, install the OpenAI library (use ! only if you are installing it in a notebook):

!pip install openai

Now let’s write the code to transcribe a sample speech file to text:

# Import the OpenAI library
from openai import OpenAI

# Create an API client
client = OpenAI(api_key="YOUR_KEY_HERE")

# Load the audio file in binary mode
audio_file = open("AUDIO_FILE_PATH", "rb")

# Transcribe the audio with the Whisper model
transcription = client.audio.transcriptions.create(
    model="whisper-1",
    file=audio_file
)

# Print the transcribed text
print(transcription.text)

This script showcases a straightforward way to use OpenAI Whisper for transcribing audio files. By running this script with Python, you’ll see the transcription of your specified audio file printed to the console.

Feel free to experiment with different audio files and explore additional options provided by the Whisper API to customize the transcription process to your needs.

Tips for Better Transcriptions

Whisper is powerful, but there are ways to get even better results from it. Here are some tips:

  • Clear audio: The clearer your audio file, the better the transcription. Try to use files with minimal background noise.
  • Language selection: Whisper supports multiple languages. If your audio isn’t in English, make sure to specify the language for better accuracy.
  • Customize output: Whisper offers options to customize the output. You can ask it to include timestamps, confidence scores, and more. Explore the documentation to see what’s possible.
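
For example, here is a variation on the earlier script that passes a language hint and asks for a more detailed response. This assumes the language and response_format parameters of the transcriptions endpoint; check the current OpenAI documentation, as the available options may change.

# Sketch: transcribing non-English audio with a language hint and verbose output.
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY_HERE")

with open("entrevista_em_portugues.mp3", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
        language="pt",                   # ISO-639-1 code of the spoken language
        response_format="verbose_json"   # includes segment-level timestamps
    )

print(transcription.text)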

Advanced Features

Whisper isn’t just for simple transcriptions. It has features that cater to more advanced needs:

  • Real-time transcription: You can set up Whisper to transcribe audio in real time. This is great for live events or streaming.
  • Multi-language support: Whisper can handle multiple languages in the same audio file. It’s perfect for multilingual meetings or interviews.
  • Fine-tuning: If you have specific needs, you can fine-tune Whisper’s models to suit your audio better. This requires more technical skill but can significantly improve results.
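
If you would rather run Whisper locally, for instance to experiment before fine-tuning, the open-source whisper package offers a similar one-call interface. A minimal sketch, assuming you have installed openai-whisper and ffmpeg:

# Local transcription sketch with the open-source whisper package
# (pip install openai-whisper; ffmpeg must also be installed).
import whisper

# Load one of the pretrained checkpoints ("tiny", "base", "small", "medium", "large").
model = whisper.load_model("base")

# Transcribe a local file; the language hint is optional.
result = model.transcribe("interview.mp3", language="en")

print(result["text"])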

Conclusion

Working with OpenAI Whisper opens up a world of possibilities. It’s not just about transcribing audio – it’s about making information more accessible and processes more efficient.

Whether you’re transcribing interviews for a research project, making your podcast more accessible with transcripts, or exploring new ways to interact with technology, Whisper has you covered.

OpenAI's transcription tool called Whisper has come under fire for a significant flaw: its tendency to generate fabricated text, known as hallucinations. Despite the company's claims of "human level robustness and accuracy," experts interviewed by the Associated Press have identified numerous instances where Whisper invents entire sentences or adds non-existent content to transcriptions.

The issue is particularly concerning given Whisper's widespread use across various industries. The tool is employed for translating and transcribing interviews, generating text for consumer technologies, and creating video subtitles.

Perhaps most alarming is the rush by medical centers to implement Whisper-based tools for transcribing patient consultations, even though OpenAI has given explicit warnings against using the tool in "high-risk domains."

Instead, the medical sector has embraced Whisper-based tools. Nabla, a company with offices in France and the US, has developed a Whisper-based tool used by over 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children's Hospital Los Angeles.

Introducing Whisper

We’ve trained and are open-sourcing a neural net called Whisper that approaches human level robustness and accuracy on English speech recognition.

Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. We show that the use of such a large and diverse dataset leads to improved robustness to accents, background noise and technical language. Moreover, it enables transcription in multiple languages, as well as translation from those languages into English. We are open-sourcing models and inference code to serve as a foundation for building useful applications and for further research on robust speech processing.

The Whisper architecture is a simple end-to-end approach, implemented as an encoder-decoder Transformer. Input audio is split into 30-second chunks, converted into a log-Mel spectrogram, and then passed into an encoder. A decoder is trained to predict the corresponding text caption, intermixed with special tokens that direct the single model to perform tasks such as language identification, phrase-level timestamps, multilingual speech transcription, and to-English speech translation.
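
The lower-level API of the open-source release mirrors that description fairly directly. A minimal sketch based on the usage shown in the project's README (file path and model size are placeholders):

# Sketch of Whisper's pre-processing and decoding steps using the open-source package.
import whisper

model = whisper.load_model("base")

# Load audio and pad or trim it to the 30-second window the model expects.
audio = whisper.load_audio("sample.wav")
audio = whisper.pad_or_trim(audio)

# Convert the audio to a log-Mel spectrogram on the model's device.
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# Detect the spoken language from the spectrogram.
_, probs = model.detect_language(mel)
print("Detected language:", max(probs, key=probs.get))

# Decode the spectrogram into text.
options = whisper.DecodingOptions(fp16=False)  # fp16=False keeps it CPU-friendly
result = whisper.decode(model, mel, options)
print(result.text)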

Other existing approaches frequently use smaller, more closely paired audio-text training datasets, or use broad but unsupervised audio pretraining. Because Whisper was trained on a large and diverse dataset and was not fine-tuned to any specific one, it does not beat models that specialize in LibriSpeech performance, a famously competitive benchmark in speech recognition. However, when we measure Whisper’s zero-shot performance across many diverse datasets we find it is much more robust and makes 50% fewer errors than those models.

About a third of Whisper’s audio dataset is non-English, and it is alternately given the task of transcribing in the original language or translating to English. We find this approach is particularly effective at learning speech to text translation and outperforms the supervised SOTA on CoVoST2 to English translation zero-shot.

We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results, but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.

Tesla Model Y ‘Juniper’ spotted in China with several big changes

The Tesla Model Y refresh, coded “Juniper,” has been spotted in China with several big changes to its exterior as the automaker plans to revamp its best-selling vehicle.

It appears the new look will first launch in China but will likely not be put out for deliveries until next year. Tesla CEO Elon Musk has maintained on two separate occasions this year that the new Model Y will not be out in 2024.

That does not mean Tesla is not already working on something, and perhaps it is updating the Model Y’s look just as it did with the Model 3 over the past couple of years.

#tesla #modely #juniper #china

Based on the images, we can see that there are a few changes with the Model Y that Tesla is looking to implement, especially with lighting.

The images from the rear and rear quarter panel suggest there is a light bar, bringing a major change to the overall look and aesthetic of the Tesla Model Y that is currently offered.

Reports from China today suggest the Model Y will have a full-width LED light bar, much like what was unveiled on the Tesla Cybercab Robotaxi earlier in October.

Chinese automotive insider SugarDesign said on Weibo that the vehicle will be equipped with this style of light bar, bringing a more modern design to the vehicle (via Google Translate):

“With the release of new spy photos, Tesla’s mid-term update Model Y is getting closer and closer to us. The split headlight layout on the front face is already very obvious, and it seems to confirm the through-type headlight design mentioned by the “internal netizen” before.”

iPhone SE 4 to debut early next year with Face ID, OLED display, and Apple's A18 chip

Rumor mill: Apple's upcoming budget iPhone SE 4 handset is set to launch early next year. We had previously heard that the iPhone SE would receive a slew of features that have trickled down from Cupertino's latest phones, including Apple Intelligence support and Face ID. Now, new leaks have revealed even more details about the fourth-generation iPhone SE.

#apple #iphone #faceid #smartphone #technology #newsonleo

The latest info on the iPhone SE 4 comes from Apple analyst Ming-Chi Kuo. He writes in a Medium post that mass production of the company's cheapest handset will begin this December, with projected production numbers of around 8.6 million units from December 2024 to the first quarter of 2025. It seems likely that the phone will launch in either March or April, which lines up with reports from last July.

Earlier this month, Bloomberg's Apple expert Mark Gurman wrote that the updated iPhone SE, codenamed V59, will be the company's new entry-level model, but it will feature a number of upgrades over the last update: the 2022 iPhone SE (third generation), which offered 5G connectivity as the most notable improvement over the 2nd-gen handset.

Probably the biggest upgrade in the iPhone SE 4 will be the removal of the home button, meaning the handset will join the other Apple devices that use Face ID. It's said to closely resemble the iPhone 14, including the notch cutout, and to use an OLED display instead of an LCD.

Gurman also said that the iPhone SE 4 will likely be powered by the same A18 chip found in the iPhone 16 and 16 Plus, thereby enabling support for Apple Intelligence.

SpaceX Takes "One Giant Leap" for Space Tech || Peter Zeihan

#spacex #space #manufacturing #technology

SpaceX Takes "One Giant Leap" for Space Tech

In a recent video, geopolitical analyst Peter Zeihan discusses the significant impact of SpaceX's successful booster catch on the future of space technology. This achievement marks a major milestone in the reusability of rockets, drastically reducing the cost of launching payloads into orbit.

The Cost of Space Travel

Historically, launching objects into orbit was an incredibly expensive endeavor. In the 1970s and 80s, the cost could exceed $50,000 per kilogram. However, SpaceX's innovations, such as reusable rockets and boosters, have significantly reduced this cost to around $1,500 per kilogram. With the successful booster catch, this cost is expected to decrease even further, potentially reaching $500 per kilogram or less.
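
To put those figures in perspective, here is a quick back-of-the-envelope comparison for a hypothetical 1,000 kg payload at each of the cited price points:

# Back-of-the-envelope launch cost for a hypothetical 1,000 kg payload,
# using the per-kilogram figures cited above.
payload_kg = 1_000
cost_per_kg = {
    "1970s-80s era": 50_000,
    "current reusable-rocket pricing": 1_500,
    "projected with full reusability": 500,
}

for era, price in cost_per_kg.items():
    print(f"{era}: ${payload_kg * price:,}")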

The New Era of Space Economics

This dramatic reduction in launch costs opens up new possibilities for space exploration and commercial activities. Zeihan identifies four key areas that could benefit from these advancements:

  1. Lenses: High-precision lenses, crucial for semiconductor manufacturing, could be produced in the microgravity environment of space, leading to more advanced and efficient chips.
  2. Drugs: Proteins, essential components of many drugs, can be grown in space without the limitations imposed by Earth's gravity. This could enable the development of more complex and effective medications.
  3. Fiber Optic Cables: Specialized fiber optic cables, capable of transmitting vast amounts of data, could be manufactured in space with greater precision and quality.
  4. Quantum Computing: The precise conditions required for quantum computing could be achieved more easily in space, accelerating the development of this revolutionary technology.

The Future of Space

Zeihan envisions a future where space becomes a platform for manufacturing, research, and innovation. Satellite manufacturing facilities in orbit could reduce the cost of communication and data transmission. Additionally, advancements in space technology could pave the way for future missions to the moon and Mars.

In conclusion, SpaceX's successful booster catch is a game-changer for the space industry. By reducing launch costs and enabling new possibilities, this achievement could usher in a new era of space exploration and commercial activity.

Let's dive deeper into the space manufacturing sector, exploring the products, technologies, and forecasts for the next decade.

In-Orbit Assembly and Construction

In-orbit assembly and construction is the process of building or assembling structures, spacecraft, or satellites in space using robotic arms, grippers, and other tools. This technology has been demonstrated on the International Space Station (ISS) and will play a crucial role in enabling the construction of larger, more complex space structures.

Companies like Nanoracks, Made In Space, and Bigelow Aerospace are developing and demonstrating in-orbit assembly technologies. These technologies include:

  1. Robotic arms and grippers: These are used to manipulate and assemble components in space. Examples include the Canadarm2 on the ISS and the robotic arm on NASA's Space Launch System (SLS) rocket.
  2. In-orbit assembly machines: These are specialized machines that can print, assemble, or fabricate structures in space. Examples include Made In Space's 3D printer and the Nanoracks M3.
  3. Modular construction: This involves building structures in parts, which are then assembled in space using robotic arms or other tools.

3D Printing and Additive Manufacturing

3D printing and additive manufacturing are technologies that create objects by adding materials layer by layer, rather than subtracting them. In space, this technology has the potential to revolutionize the way we manufacture and assemble structures.

Companies like Made In Space, NASA, and the European Space Agency (ESA) are developing 3D printing technologies for space applications. These technologies include:

  1. In-space 3D printing: This involves printing objects in space using materials like plastic, metal, or ceramic.
  2. Hybrid 3D printing: This combines additive and subtractive manufacturing techniques to create complex structures.
  3. Inflatable 3D printing: This involves printing structures that can be inflated to create a larger, more complex shape.

Material Processing and Recycling

Material processing and recycling are critical technologies for space manufacturing, as they enable the recovery and reuse of materials from space missions.

Companies like NASA, the ESA, and private ventures are developing technologies to process and recycle materials in space. These technologies include:

  1. Plasma-based material processing: This involves using plasma to process and transform materials in space.
  2. Advanced recycling technologies: These involve recovering and reusing materials from space missions, such as recycling metals or plastics.
  3. Closed-loop life support systems: These involve recycling air, water, and other resources in space to minimize waste and maximize resource utilization.

Propulsion Systems

Propulsion systems are critical for space missions, as they enable the transportation of spacecraft and cargo to and from space.

Companies like NASA, the ESA, and private ventures are developing advanced propulsion systems for space applications. These technologies include:

  1. Ion engines: These are electric propulsion systems that use ions to generate thrust.
  2. Hall effect thrusters (HETs): These are electric propulsion systems that use a magnetic field to generate thrust.
  3. Advanced ion engines: These are next-generation ion engines that offer improved specific impulse and efficiency.

Life Support and Air Recycling

Life support and air recycling are critical technologies for space missions, as they enable the survival of astronauts and crew members in space.

Companies like NASA, the ESA, and private ventures are developing advanced life support systems for space applications. These technologies include:

  1. Closed-loop life support systems: These involve recycling air, water, and other resources in space to minimize waste and maximize resource utilization.
  2. Advanced air recycling technologies: These involve recovering and reusing oxygen and other gases in space.
  3. Air revitalization systems: These involve removing carbon dioxide and other gases from the air and releasing oxygen and other gases.

Forecast for the Next Decade

By 2030, space manufacturing is expected to become a significant sector in the space industry, driven by the growing need for sustainable and reliable access to space.

Here are some predictions for the next decade:

  1. In-orbit assembly and construction will become increasingly common, with companies like Nanoracks, Made In Space, and Bigelow Aerospace developing and demonstrating their technologies.
  2. 3D printing and additive manufacturing will continue to advance, enabling the creation of complex structures and components in space.
  3. Space-based material processing and recycling will become more widespread, with companies like NASA and private ventures developing technologies to recover and reuse materials in space.
  4. Advanced propulsion systems will play a crucial role in enabling sustainable and efficient space missions, with ion engines and HETs becoming more prevalent.
  5. Closed-loop life support systems will be widely adopted in space missions, enabling long-duration missions and reducing reliance on resupply missions.
  6. Space manufacturing will drive the development of new business models, such as satellite servicing, space-based manufacturing, and lunar or Mars-based industries.
  7. Governments and private companies will invest heavily in space manufacturing infrastructure, including launch facilities, manufacturing facilities, and research and development centers.

Overall, the next decade will see significant advancements in space manufacturing, enabling more sustainable and reliable access to space, and paving the way for the development of a thriving space-based industry.

Tesla cars are driving a reported 6.4 million miles PER DAY on FSD.

That is something that makes it impossible to catch. No other company is close to the amount of data being collected to train the model on.

That's a crazy amount of data. They probably know more about roads and cars than any other company or government.

It is only increasing. That is why those who compare Waymo to Tesla are crazy. The amount of miles driven autonomously, i.e. data, is heavily in Tesla's favor.

Any AI application is only a matter of data, algorithms and compute.

Tesla has more data about autonomous driving, and soon a 100K cluster to train that data on. We can presume they have software engineers who really know how to design algorithms.

Yeah, it has probably been the plan all along from Elon. X was maybe just a starting point to have data to train on and build the skills.

I can't say if Elon looked at the data component when he bought X, but I think he realized it very quickly.

Either way, he is sitting on one of the sites that generates a ton of data each day. Half a billion people appear to be adding to his database every day.

Is it just me who thinks it's a little too big a coincidence that he bought X before the AI rush? I think he saw what was coming.

Biden administration announces $3 billion in funding for rural electric cooperatives to promote renewable energy and reduce electricity rates.

The Biden administration announced more than $3 billion Friday in funding for seven rural electric cooperatives, part of a broader effort to promote renewable energy in rural areas.

#biden #electricity #rural #energy

The grants include nearly $2.5 billion in financing for the Tri-State Generation and Transmission Association, as well as nearly $1 billion through the Department of Agriculture’s Empowering Rural America (New ERA) program for six co-ops. The New ERA program, which uses $9.7 billion in Inflation Reduction Act funds, is the biggest federal investment in rural electrification since the New Deal in the 1930s.

The Tri-State Generation and Transmission Association funding will cut electricity rates for members by an estimated 10 percent over the next 10 years, equivalent to about $430 million in benefits to rural electricity consumers.

Meanwhile, the six co-ops announced Friday, some of which will serve rural areas in multiple states, are in Minnesota, South Dakota, South Carolina, Colorado, Nebraska and Texas.

“The Inflation Reduction Act makes the largest investment in rural electrification since FDR and the New Deal in the 1930s,” said John Podesta, senior adviser to the president for international climate policy. “Today’s awards will bring clean, affordable, reliable power to rural Americans from Colorado to Texas to South Carolina.”

OpenAI disbands another team focused on advanced AGI safety readiness

OpenAI has shut down its AGI Readiness Team, a group responsible for developing safeguards around advanced artificial intelligence systems.

The team focused on the safety of artificial general intelligence (AGI), which OpenAI defines in economic terms as AI systems capable of operating autonomously and automating a wide range of human tasks. The team members will be reassigned to other departments within the company.

#openai #safety #agi #technology #generativeai

Must be an almost impossible task to build up rulesets around AI so it won't be used for "evil".

It probably needs to be conscious for that...

There are guardrails that are going into place.

A big piece of the equation, in my view, is open source. That way it is in the open and everyone can see what is being done.

The danger is having to trust the likes of Sam Altman who is seeking regulatory capture so he is guaranteed to succeed.

Agree with you there

Miles Brundage, OpenAI's outgoing Senior Advisor for AGI Readiness, expresses serious concerns about this development as he announces his departure from the company. "In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready," Brundage states in a detailed public statement.

Former internal AGI readiness advisor warns of lack of regulation
Brundage points to significant gaps in AI oversight, noting that tech companies have strong financial motivations to resist effective regulation. He emphasizes that developing safe AI systems requires deliberate action from governments, companies, and civil society rather than occurring automatically.

Following his departure, Brundage plans to either establish or join a non-profit organization, saying he can have more impact working outside the industry. "I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so." His team developed the five stages of AI progress for OpenAI.

This latest shutdown follows OpenAI's decision in May to disband its Superalignment team, which studied long-term AI safety risks. At that time, team leader Jan Leike publicly criticized the company, stating that "safety culture and processes have taken a backseat to shiny products."

Costco partners with Electric Era to bring back EV charging in the U.S.

Costco, known for its discount gas stations, has left EV drivers in need of juicing up out in the cold for the past 12 years. But that seems about to change now that the big-box retailer is putting its brand name on a DC fast-charging station in Ridgefield, Washington.

After being one of the early pioneers of EV charging in the 1990s, Costco abandoned the offering in 2012 in the U.S.

While opening just one station may seem like a timid move, the speed at which the station was installed — just seven weeks — could indicate big plans going forward.

#costco #electriccharging #evs #retailer #newsonleo #technology

Besides lightning-speed installation, Electric Era, the Seattle-based company making and installing the charging station, promises to offer “hyper-reliable, battery-backed fast charging technology in grid-constrained locations.”

Its stalls can deliver up to 200 kilowatts and come with built-in battery storage, allowing for lower electricity rates and the ability to remain operational even when power grids go down.
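To give a rough sense of what a 200-kilowatt stall means in practice, here is a simple estimate; the 75 kWh pack size, the 10-80% charge window, and the flat power curve are all illustrative assumptions, since real EVs taper charging power as the battery fills.

```python
# Back-of-the-envelope charging-time estimate for a 200 kW stall.
# Pack size, charge window, and efficiency are assumptions for illustration only.
stall_power_kw = 200
pack_kwh = 75            # assumed battery capacity
charge_fraction = 0.70   # e.g. charging from 10% to 80%
efficiency = 0.92        # assumed charging efficiency

energy_needed_kwh = pack_kwh * charge_fraction / efficiency
minutes = energy_needed_kwh / stall_power_kw * 60
print(f"~{minutes:.0f} minutes to add {pack_kwh * charge_fraction:.0f} kWh at {stall_power_kw} kW")
```

That works out to well under half an hour for a typical fast-charging session, assuming the stall can actually sustain its peak output.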

If that sounds like it could very well rival Tesla’s SuperCharger network, it’s no coincidence: Quincy Lee, its CEO, is a former SpaceX engineer.

Costco also seems confident enough in the company to have put its brand name on the EV-charging station. Last year, the wholesaler did open a pilot station in Denver, this time partnering with Electrify America, the largest charging network in the U.S. However, Costco did not put its brand name on it.

In an interview with Green Car Reports, Electric Era said it was still in talks with Costco about the opening of new locations. Last year, Costco said it was planning to install fast chargers at 20 locations, without providing further details. It has maintained EV-charging operations in Canada, the UK, Spain, and South Korea.

Meanwhile, the wholesaler’s U.S. EV-charging plans might very well resemble those of rival Walmart, which last year announced it was building its own EV fast-charging network in addition to the arrangements it already had with Electrify America.

We just got an early tease for Samsung’s next folding phones

Samsung, like most other phone manufacturers, sticks to a pretty predictable schedule: a new numbered iteration per model per year. That’s why we weren’t surprised to hear that a Galaxy Z Flip 7 and Galaxy Z Fold 7 were slated for 2025, but we now have confirmation. Their codenames just leaked, and we also learned of an unexpected third model.

#samsung #smartphone #foldingphone #galaxy #newsonleo

GalaxyClub broke the news after receiving information from Samsung. According to the site, the Galaxy Z Flip 7 is code-named B7, while the Galaxy Z Fold 7 is code-named Q7. There’s also a third model with the code name Q7M, but it’s not clear what this entry actually is. Since it bears a derivative code name, this handset is likely a spinoff of the Galaxy Z Fold 7 — though we don’t know what that entails.

GalaxyClub goes on to say that the mysterious Q7M has a development timeline that nearly matches the Galaxy Z Fold 7, but neither one is expected until summer 2025 or later. The launch of the Galaxy S25 and Galaxy A56 in the first part of 2025 would likely take too much focus away from a new entry.

A few fans have suggested the Q7M could be the rumored trifold phone Samsung has in development. However, that rumor also suggests that a Galaxy Z Flip Special Edition might be on the way. If that’s the case, why would the code name be a spinoff of the Galaxy Z Fold code?

We just got our first look at this crazy-fast gaming phone

Asus has always been good at making their devices look “gamer-y.” Just take a look at the ROG Phone 8 Series, with its LED lights, Republic of Gamers branding, and somewhat ostentatious markings. Now we’ve gotten our first look at the ROG Phone 9 — at least the renders of it — as well as a few details about what’s going on under the hood.

#asus #smartphone #gaming #technology

The website 91Mobiles leaked the renders of the ROG Phone 9 series alongside a bit more information. Two different models are expected: the Asus ROG Phone 9 and ROG Phone 9 Pro, and both versions will be powered by the new Snapdragon 8 Elite chipset.

We’re relatively sure the standard and the Pro models share the same design. The renders also show the ROG Phone 9 Pro working with the AeroActive Cooler X, an accessory that helps improve airflow and cooling while playing games. The renders don’t make it clear whether the accessory is the same as the cooler for the Phone 9, or if the Pro has an upgraded version.

Both phones appear to have the same hole-punch camera in the front, along with a three-camera setup on the back. As for specs, the ROG Phone 9 will have a 6.78-inch display with a 120Hz refresh rate. It's expected to launch with 24GB of RAM and 1TB of storage (an absolutely astronomical amount for a phone), as well as a 50-megapixel primary camera, a 13MP ultrawide lens, and a 32MP telephoto lens. The front camera is also 32MP.

Apple Intelligence is out today

AI is on the way.

Apple’s AI features are finally starting to appear. Apple Intelligence is launching today on the iPhone, iPad, and Mac, offering features like generative AI-powered writing tools, notification summaries, and a cleanup tool to take distractions out of photos. It’s Apple’s first official step into the AI era, but it’ll be far from its last.

#apple #appleintelligence #ai #iphone #mac #technology #newsonleo

Apple Intelligence has been available in developer and public beta builds of Apple’s operating systems for the past few months, but today marks the first time it’ll be available in the full public OS releases. Even so, the features will still be marked as “beta,” and Apple Intelligence will very much remain a work in progress. Siri gets a new look, but its most consequential new features — like the ability to take action in apps — probably won’t arrive until well into 2025.

In the meantime, Apple has released a very “AI starter kit” set of features. “Writing Tools” will help you summarize notes, change the tone of your messages to make them friendlier or more professional, and turn a wall of text into a list or table. You’ll see AI summaries in notifications and emails, along with a new focus mode that aims to filter out unimportant alerts. The updated Siri is signified by a glowing border around the screen, and it now allows for text input by double-tapping the bottom of the screen. It’s helpful stuff, but we’ve seen a lot of this before, and it’ll hardly represent a seismic shift in how you use your iPhone.

Apple says that more Apple Intelligence features will arrive in December. ChatGPT will be available in Siri; Writing Tools will let you describe the changes you want Apple’s AI to make; and Apple’s AI camera feature — Visual Intelligence — will be able to tell you about objects around you. In the following months, Apple says that it’ll launch Priority Notifications and major upgrades for Siri, including awareness of what’s on your screen and the ability to take action within apps.

Airbnb CEO Brian Chesky on the gospel of Steve Jobs and what founder mode really means

The Airbnb cofounder discusses being ‘in the details’ and why traditional management is doing it wrong.

#airbnb #stevejobs #brianchesky #technology #management

Today, I’m talking with Airbnb cofounder and CEO Brian Chesky, who is only the second person to be on Decoder three times — the other is Meta CEO Mark Zuckerberg. It’s rare company, and what made this one particularly good is that Brian and I were together in our New York studio for the first time; it’s pretty easy to hear how much looser and more fun the conversation was because we were in the same room.

Brian made a lot of waves earlier this year when he started talking about something called “founder mode” — or at least, when well-known investor Paul Graham wrote a blog post about Brian’s approach to running Airbnb that gave it that name. Founder mode has since become a little bit of a meme, and I was excited to have Brian back on to talk about it and what specifically he thinks it means.

One of the reasons I love talking to Brian is because he spends so much time specifically obsessing over company structure and decision-making — if you listened to his previous Decoder episodes, you already had a preview of founder mode because Brian radically restructured the company after the covid-19 pandemic to get away from its previous divisional structure and transition into a more functional organization that works from a single roadmap. That allows him to have input on many more decisions.

Apple updates the iMac with new colors and an M4 chip

The M4 chip makes its way to the iMac.

Apple is updating the iMac with an M4 chip. The new iMac, announced this morning, includes an M4 chip with an 8-core CPU and up to a 10-core GPU. The entry-level model costs $1,299 with two Thunderbolt / USB 4 ports, while the higher-end models start at $1,499 and have four ports.

#apple #imac #m4 #chip #technology #newsonleo

It’s also bundled with accessories that now use USB-C charging ports instead of Lightning. Like the prior model, the new iMac has a 24-inch, 4.5K display. However, Apple is offering a new “nano-texture glass option” for $200 extra, which is supposed to help reduce reflections and glare.

Additionally, the iMac’s base RAM has been doubled to 16GB over the prior model, with the ability to configure the higher-end option up to 32GB. Apple’s new iMac also comes with a 12MP webcam, along with new Apple Intelligence features that are starting to roll out today, such as AI-powered writing and editing features and a redesigned Siri.

The updated iMac is available to preorder today, with availability starting on November 8th. It’s available in seven colors: green, yellow, orange, pink, purple, blue, and silver. There notably isn’t a larger model available, as Apple previously confirmed it had no plans to replace the now-discontinued 27-inch model powered by Intel.

Google might stick a Tensor chip in the Pixel Watch 5

I mean, it’s been crickets from Qualcomm.

Starting in 2026, Google might go in-house with a custom Tensor processor for the Pixel Watch 5.

The rumor comes courtesy of Android Authority, which cites leaked documents from Google's gChips division. According to the leaked plans, the wearable Tensor chip, codenamed NPT, sports a core configuration of one Arm Cortex-A78 and two Arm Cortex-A55s. These are older CPU cores, but that's a fairly typical move with wearable processors. Other than that, details are scant, and it's currently unknown which process node the planned wearable Tensor chip might use.

#google #pixelwatch #technology #newsonleo #qualcomm

Chips aren’t usually as heavy a focus for smartwatches as they are for smartphones. So long as performance is snappy, smartwatch makers tend to focus on ways to prolong battery life without sacrificing smart features. But this is a potentially interesting development given that chip stagnation has historically been a huge obstacle for Android smartwatches.

Long story short, Android smartwatches used to be beholden to Qualcomm chips — and Qualcomm took its dandy time making processors that could keep up with the competition. (The Snapdragon Wear 2100, 3100, and 4100 were not great, Bob.) It wasn't until Google and Samsung teamed up to create Wear OS 3 in 2021 that Qualcomm really started to feel the pressure. That chip problem manifested in Google's own Pixel Watch lineup. The first watch was powered by an older Samsung Exynos chip before the Pixel Watch 2 switched over to Qualcomm's Snapdragon W5. However, Qualcomm last launched a new wearable chip in 2022, and it's been crickets since.

Microsoft Teams is getting threads and combined chats and channels

Teams threaded conversations are coming in 2025.

Microsoft is finally adding threaded conversations to its Microsoft Teams communications app. Threaded conversations in Teams won’t arrive until mid-2025, but ahead of that, Microsoft is also combining its separate chats and channels UI inside Teams into a single view.

#microsoft #teams #chats #ai #technology

I exclusively revealed Microsoft was planning a new chats and channels experience in my Notepad newsletter in August, and Microsoft is now bringing this unified UI to Teams in public preview in November.

“We’ve redesigned the chat and channels experience to simplify your digital workspace by bringing chats, teams, and channels into one place under Chat,” explains Jeff Teper, president of collaborative apps and platforms at Microsoft. “This integrates both chat and channels into your critical workflows, making it easier to access, triage and organize your conversations.”

This new UI fixes one of the big reasons Microsoft Teams sucks for messaging, so you no longer have to flick between separate sections to catch up on messages from groups of people or channels. You’ll be able to configure this new section to keep chats and channels separate or enable custom sections where you can group conversations and projects together.

Next-gen laptops may have a weird mix of components

Many gamers are awaiting CES 2025 with a great deal of excitement. Not only are we said to be getting Nvidia’s RTX 50-series, but we should also see some of the next-gen top gaming laptops make their debut during the event. However, according to a new leak, these next-gen laptops may not be so next-gen across the board. With a lot of processors to choose from, we might end up with configurations that focus on new GPUs while sticking to older CPUs.

#laptop #gamers #technology #computers

Given that Intel is said to be launching the laptop versions of Arrow Lake in early 2025, and AMD is working on the Ryzen AI 300 Max, one would expect some beastly laptops to be unveiled at CES 2025, but Golden Pig Upgrade Pack on Weibo begs to differ. This news was first shared by VideoCardz. While this user has been a fairly reliable source of hardware leaks up until now, it’s important to take it all with a bit of skepticism.

According to the leaker, we might primarily see laptops that use Nvidia’s RTX 50-series, but even the high-end variants might choose to stick to previous-gen laptop CPUs. This includes chips like Intel’s Raptor Lake refresh or AMD’s Zen 4 under the Ryzen 8000 moniker. More specifically, the leaker refers to the Core i7-14650HX and AMD’s Zen 4.

That would certainly be an unexpected configuration. Both AMD and Intel are said to be revealing their next-gen laptop chips during CES 2025, so it would make sense to see laptop manufacturers put those chips together with Nvidia’s best graphics cards. There could be a few reasons for such design choices.

From Reddit:

How do I get into the biotech sector? What resources should I use, what should I study, etc.?

How do I get into the big biotech companies? Basically, how do I study for it, what resources should I use, etc.? If anyone is working in the field, please share your experience. I feel like this is the next big sector that will be booming in the world, which is why I want to know how to step into this emerging field. I am very curious about the things happening in this sector, especially with Neuralink and Kernel. Help me know.

It isn't emerging. People have been working in biotech for the past 30 years. We (Canada) even experienced a "gold rush" in the biotechnology sector in the mid-2000s. I studied 4 years in biotechnology and 4 more years in biochemistry.

You'll need at least a master's degree because this field is highly competitive for jobs. The salaries are not great, and people usually stay in that field for about 10 years before switching, unless you get a postdoc and a cushy (not that cushy) academic position.

You need to define "biotech" since Neuralink is not biotech compared to companies like 23andMe or Theranos. As others mentioned, you more or less need a Ph.D. if you want to enter a biology/biochemistry-based specialty job, since the pay is not good, there is a lot of competition, and you need to apply very specialized knowledge.

If you're more into Neuralink or Synchron, you actually need an engineering degree and not a biology-based degree. And even then, it is primarily E.E. or Mech.E., not Chem.E., degrees that have newer jobs. If you don't want to go for a Ph.D., I would recommend an engineering degree, although a Biomedical Engineering degree will limit your job options.

I am more into the Neuralink side of things. Can you suggest more? Btw, what are these companies if they are not biotech then?

Do a PhD in molecular biology, biochemistry, cell biology, etc.

If you're particularly interested in neuralink and the like, do a PhD in neurobiology.

Netflix is making it easier to bookmark and share your favorite parts of a show

Use it to easily relive your favorite scenes — or the awful ones.

Netflix is launching a new feature called “Moments” that lets you save and share bookmarks to a specific spot in a show or movie. The feature seems like it could be a very useful way to share parts of something you’re watching with friends or on social media.

#netflix #moments #socialmedia #technology #newsonleo

To capture a moment, tap on the screen while you’re watching something on Netflix to see the various menu options, then tap on the new “Moments” button to pop up a new menu, and then tap the save button. From the Moments menu, you can also see other Moments you’ve saved for that show or movie and share Moments to various apps. You can also copy a link to the Moment to your clipboard.

Netflix gave me early access to the feature, so I used it to make a moment of the best scene in Lost — spoiler warning, obviously — which is perhaps the most Jay Peters thing to do.

It’s all pretty easy to use, and I think a lot of people are going to be sharing links to their favorite scenes with their group chats or to their social networks. I could also see Moments being widely used as a personal bookmarking tool. Instead of scouring YouTube to find the exact clip of a certain moment in a show you want to rewatch, you could just watch it from your personal library of saved clips.

If you do end up being a frequent user of Moments, it sounds like you probably won’t have to worry about having too many of them. “The number of Moments you can save depends on the length of the content,” spokesperson Dorian Rosenburg tells The Verge. “However, there’s plenty of space to save your favorite Moments, so most members won’t need to worry about a limit when it comes to saving across multiple shows and movies.”

Skyrim "was not as polished an experience on the PS3" as getting the RPG to work on Sony's console was a "Herculean effort for Bethesda"

Lead Skyrim designer and former senior Starfield systems designer Bruce Nesmith has revealed how Bethesda Game Studios struggled to properly port the fifth Elder Scrolls entry onto the PlayStation 3.

Speaking in a recent interview with VideoGamer, Nesmith explains that Skyrim was already pushing the Xbox 360's capabilities to the limit, making as much use of the console's shared memory system as it could. On the other hand, the PlayStation 3 uses split memory - a fact that left developers faced with a difficult challenge. "The PS3 had a memory architecture difference [compared to] the Xbox 360," says Nesmith.

#skyrim #gaming #gamestudios #videogamer

"So they had this bifurcation of memory where you had 50% for game logic and 50% for graphics," he continues. "And that was a hard boundary, you couldn't break that. Whereas the 360 had a single block of memory and it was up to you how you wanted to divide it up." As Nesmith states, Sony's system indeed had its advantages over Microsoft's own - but certainly not in terms of Skyrim's grueling porting process.

"I remember the enormous amount of effort our programmers put into making it work at all on the PS3," says the former lead. "It was a Herculean effort, and my hat's off to everybody on that team who did that work, because that was thankless, hard, long hours to make that happen at all." Despite the hard work, though, the port still suffered upon its release. Fans may recall frame rate drops, lag, and lower-quality visuals.

Genome Breakthrough Brings Scientists One Step Closer to Reviving Extinct Thylacine

Scientists are now one step closer to reviving the thylacine, thanks to key advances in genomic and reproductive technology that also provide hope for protecting endangered living marsupials.

Colossal Biosciences, a company involved in creating de-extinction technologies, says it has nearly completed reconstructing the thylacine genome, thanks in part to a serendipitous discovery that the company says helped to advance its research into reviving the enigmatic species.

#genome #scientists #thylacine #technology

The thylacine, also known as the Tasmanian tiger, was a marsupial resembling a canine believed to have gone extinct in 1935 due to human overhunting. Unlike the cautionary tale presented in Jurassic Park, experts believe the thylacine could be returned to its former ecosystem with relative safety.

Two years ago, Colossal announced its “de-extinction” project aiming to revive the creature. The company also focuses on the mammoth and preserving endangered species that are not extinct.

One of the company’s latest breakthroughs on the project owes much to luck. Last year, a thylacine head preserved in ethanol was discovered hidden away in a cupboard in a Melbourne museum. Crucially, the soft tissues of the 110-year-old sample were well maintained.

Generally, long sequences of DNA break down shortly after death, yet, in this case, tissue preservation was so thorough that rare and delicate genetic material survived for over a century. Notably, the rare RNA preserved in this unique specimen varies by tissue, and the complete head provided RNA from various parts, such as the eyes and tongue.

GPT-5: everything we know so far about OpenAI’s next frontier model

There’s perhaps no product more hotly anticipated in tech right now than GPT-5. Rumors about it have been circulating ever since the release of GPT-4, OpenAI’s groundbreaking foundational model that’s been the basis of everything the company has launched over the past year, such as GPT-4o, Advanced Voice Mode, and the OpenAI o1-preview.

#gpt5 #openai #llm #Frontiermodel #technology

Those are all interesting in their own right, but a true successor to GPT-4 is still yet to come. Now that it's been over a year and a half since GPT-4's release, buzz around a next-gen model has never been stronger.

When will GPT-5 be released?
OpenAI has continued a rapid rate of progress on its LLMs. GPT-4 debuted on March 14, 2023, which came just four months after GPT-3.5 launched alongside ChatGPT. OpenAI has yet to set a specific release date for GPT-5, though rumors have circulated online that the new model could arrive as soon as late 2024.

Then again, some were predicting that it would get announced before the end of 2023, and later, this summer. I wouldn’t put a lot of stock in what some AI enthusiasts are saying online.

In June, former CTO Mira Murati affirmed that the next-gen model was still a year and a half out from release, which would certainly rule out a 2024 timeframe.

WordPress forces user conf organizers to share social media credentials, arousing suspicions

One told to take down posts that said nice things about WP Engine

Organisers of WordCamps, community-organized events for WordPress users, have been ordered to take down some social media posts and share their login credentials for social networks.

#wpengine #wordpress #socialmedia #technology

The order to share creds came from an employee of Automattic, the WordPress host whose CEO happens to be Matt Mullenweg, co-creator of WordPress. A letter sent to WordCamp organizers explains that the creds are needed due to "recurrent issues with new organizing teams losing access to the event's social media accounts."

So far, so sensible.

But the requirement to share creds comes in the middle of a nasty spat in the WordPress community, sparked by Mullenweg's efforts to have rival hosting biz WP Engine license the WordPress trademark or devote more staff to working on the open source content management system's code. Mullenweg argues that private-equity-controlled WP Engine is not acting in the spirit of open source by profiting from WordPress. WP Engine contends that it does plenty for the community.

One source of support for WP Engine was WordCamp Sydney, which recently used its X account to argue that the hosting biz sponsoring its events was a valuable contribution to the WordPress community. "It's not just about contributing dev back to core," event organizers argued.

We'd link to the Xeet in which WordCamp Sydney made that observation, but another Automattic employee wrote to the event's organizers with a request to take it down – on grounds that it did not "align with the Community Team's view." The Community Team includes Automattic staffers who help with WordCamps.

Disney Goes all in on AI as Hollywood Keeps Going Broke?!

#hollywood #disney #ai

Here is an in-depth summary of the video in article form:

Clownfish TV discusses Disney's potential major AI initiative and its implications for the entertainment industry. The video highlights the increasing reliance on AI in various sectors, including post-production, visual effects, and theme park experiences.

Key Points:

  • AI Integration in Hollywood: Disney is rumored to be investing heavily in AI technology to streamline production processes and reduce costs. This could lead to job losses and a shift towards AI-driven automation.
  • Impact on Creative industries: The rise of AI raises concerns about the future of creative jobs, as AI tools become more sophisticated and capable of generating content.
  • Ethical Considerations: The video touches on the ethical implications of AI, particularly regarding the use of AI to manipulate consumer behavior and collect personal data.
  • Economic Implications: The increasing use of AI in the entertainment industry could lead to significant cost savings for studios, but it may also result in job losses and a decline in creative quality.

Overall, the video paints a mixed picture of the future of AI in the entertainment industry. While AI has the potential to revolutionize the industry, it also poses significant challenges for creators and consumers alike.

AI speeds up X-Ray diagnosis for broken bones

New research says the technology can help overworked medical professionals who can miss the issue.

The National Institute for Health and Care Excellence (NICE), the UK's health assessment body, says research suggests the technology could speed up diagnosis and ease the demands on clinicians by reducing the need for follow-up appointments.

Four AI tools are to be recommended for use in urgent care in England. Each image will be checked by a medical professional to ensure that the bots aren’t working alone.

#xray #ai #technology #healthcare

NICE says that broken bones are missed in 3-10 per cent of cases, making it the most common diagnostic error in emergency departments.

Mark Chapman, director of health technology at NICE, thinks it will make medics’ jobs easier.

“These AI technologies are safe to use and could spot fractures which humans might miss, given the pressure and demands these professional groups work under,” Chapman said.

Elon Musk said that Optimus will be the biggest product that a company ever created.

Do you agree with this?

LinkedIn says it has verified 55 million users in effort to combat AI's spread of scams, misinformation

LinkedIn is trying to thwart the spread of misinformation fueled by the rise of artificial intelligence by verifying users — more than 55 million so far.

LinkedIn has verified more than 55 million of its users, for free, in order to combat the spread of misinformation fueled by the rise of artificial intelligence, the company told CNBC.

#linkedin #humans #socialmedia #ai

The Microsoft-owned service said it has the most verified individual human identities of any major social network. In November, the company will begin showing its user verification badges within the primary LinkedIn feed.

"You now see things like deep-fake videos, photos that are increasingly harder with the naked eye to understand if they're real or fake," Oscar Rodriguez, LinkedIn's vice president of trust and safety, told CNBC in an interview. "That line-blurring is what we believe poses a significant challenge in combating things like misinformation, faking expertise and so forth."

LinkedIn began verifying users in April 2023. The move followed social media platform X's decision in November 2022 to require users who wanted a verification badge to subscribe to its premium service, and came shortly after Meta launched Meta Verified, a subscription service that allowed Facebook and Instagram users to receive verification badges for their profiles.

Oracle applies to join Epic and others in new federal medical record network

Oracle announced Monday that it intends to join a federally-backed medical information exchange network called TEFCA

#oracle #healthcare #database #tefca #newsonleo #technology

Oracle has made a significant announcement by joining the Trusted Exchange Framework and Common Agreement (TEFCA), a federally backed medical network designed to simplify the sharing of patient data between clinics, hospitals, and insurance companies. This move marks a major milestone in the industry's efforts to standardize data-sharing practices and improve patient care.

TEFCA, launched in December, aims to create a national platform for health-care organizations to share patients' data in a secure and standardized manner. The network is open to all qualified health information networks (QHINs), which volunteer to participate and undergo a two-step approval process to ensure they meet the necessary technical and legal requirements. Oracle's decision to join TEFCA is significant, as it is the latest major vendor to support the network, following its chief rival Epic Systems.

Oracle's acquisition of Cerner, a leading medical records giant, for $28 billion in 2022, has given it a strong foothold in the health-care industry. By joining TEFCA, Oracle is demonstrating its commitment to improving patient care and data sharing, and its willingness to work with other industry players to achieve this goal. The company's participation in TEFCA is a significant step towards addressing the complex issue of sharing medical records between different health-care organizations, which is notoriously difficult due to data being stored in various formats across dozens of vendors.

This lack of interoperability can lead to delays, errors, and even patient harm. Oracle's commitment to interoperability is a welcome change from its competitor Epic, which has been accused of dragging its feet on interoperability efforts. In an interview with CNBC, Seema Verma, executive vice president and general manager of Oracle Health and Life Sciences, emphasized Oracle's commitment to interoperability, stating, "We are not into information blocking. We don't have that reputation." This statement stands in direct contrast to Epic, which has been criticized for its reluctance to share data with other health-care organizations.

Oracle's decision to join TEFCA is also a response to the industry's growing recognition of the importance of data sharing. The network's ultimate goal is to standardize the legal and technical requirements for sharing patients' data, making it easier for doctors and other providers to access relevant patient information. The seven QHINs currently participating in TEFCA, including Epic, will need to undergo a two-step approval process to ensure they meet the necessary technical and legal requirements. Oracle has announced its intention to begin this process, and its approval is expected to bolster the network's credibility and further its goals.

In conclusion, Oracle's decision to join TEFCA is a significant development in the health-care industry's efforts to improve patient care and data sharing. By participating in the network, Oracle is demonstrating its commitment to interoperability and its willingness to work with other industry players to achieve this goal. As the industry continues to evolve, Oracle's participation in TEFCA is likely to have a positive impact on patient care and the overall efficiency of the health-care system.

Wise's billionaire CEO fined £350,000 by UK regulators over failure to report tax issue

Kristo Käärmann, CEO and co-founder of Wise, was ordered by regulators to pay a £350,000 fine due to a breach of senior manager conduct rules.

#newsonleo #wise #taxes #technology

Kristo Käärmann, the billionaire CEO of money transfer firm Wise, was slapped with a £350,000 ($454,000) fine by financial regulators in the U.K. for failing to report an issue with his tax filings.

Käärmann, who co-founded Wise in 2011 with fellow entrepreneur Taavet Hinrikus, was on Monday ordered by the Financial Conduct Authority (FCA) to pay the sizable penalty due to a breach of the watchdog's senior manager conduct rule.

The FCA said that Käärmann failed to notify the regulator about him not paying a capital gains tax liability when he cashed in on shares worth £10 million in 2017.

The watchdog found him in breach of its Senior Management Conduct Rule 4, which states: "You must disclose appropriately any information of which the FCA would reasonably expect notice."

Improved satellite production with INTAMSYS AM technologies

French governmental space agency, the National Centre for Space Studies (CNES), is utilizing INTAMSYS' AM technologies to enhance its space technology capabilities.

#3dprinting #satellite #technology

French governmental space agency, the National Centre for Space Studies (CNES) is utilizing INTAMSYS’ AM technologies to enhance its space technology capabilities.

The Realization and Integration (RI) department of the CNES specializes in the assembly and testing of satellites, as well as developing tools and means to facilitate the assembly and testing in CNES’ clean rooms before launch (e.g. integration frames, multi-purpose trolley, lifting device, etc.). These clean rooms maintain precise conditions to ensure a stable environment for the satellite components.

“The impressive ease of use and high print quality of INTAMSYS 3D Printers have greatly contributed to our workflow by meeting our challenges perfectly. Now, we can utilize the entire range of INTAMSYS materials with a click-and-print functionality. Additionally, CADvision, INTAMSYS Partner, is highly responsive and provides a great local support, allowing for joint development of improvements to both the machine and software, further enhancing the efficiency and workflow,” says Theodore Froissart, Mechanical Integration Manager at The National Centre for Space Studies.

Founded in 1961, the CNES conducts research, designs, and operates space missions, and promotes the development of space technologies within Europe and internationally.

3D printing for faster satellite tooling

To produce these tools and means, such as integration frames and multi-purpose trolleys, the FFF 3D printing method with innovative polymer materials has been increasingly employed at CNES. Initially starting with a single-material printer in 2014, CNES' additive manufacturing laboratory has rapidly evolved due to high demand and the number of parts to be produced.

According to the company, the INTAMSYS FUNMAT PRO 610HT is the first INTAMSYS 3D printer to have been integrated into CNES' space studies. Later, the FUNMAT PRO 410 was also added to complement the current range of printers within the same laboratory.

Before integrating additive manufacturing, CNES faced several challenges with traditional manufacturing methods, particularly in qualifying materials for use in clean rooms and satellite testing. Additionally, the long design and manufacturing cycles required to produce a tool were impacting efficiency.

By leveraging 3D printing, CNES is able to streamline this process to a single day, enabling rapid prototyping and design iterations, which ease the process of testing and satellite assembly.

According to the company, the adoption of the INTAMSYS FUNMAT PRO 610HT has allowed CNES to use any material it wants, such as polycarbonate, PEEK, and ULTEM, which is crucial for manufacturing the complex tools required for satellite testing.

The capabilities of the FUNMAT PRO 610HT have significantly improved efficiency. Even when using highly specific materials such as PEEK-ESD, developed by the European Space Agency (ESA), CNES can still use the INTAMSYS PEEK profile in INTAMSUITE NEO (INTAMSYS's slicer) by adjusting only a few parameters. This small adjustment allows the first parts to be ready for printing.

CNES explores novel applications in satellite testing

The applications that CNES has developed are varied. These range from simple clean room tools to complex structural tooling for satellite testing, including thermal cycling, shock, and vibration tests before launch.

The stratospheric drone structure is one of these complex aerospace applications that requires ideal conditions for vacuum chambers and testing environments. The part was printed in one piece using ULTEM 9085 material on the FUNMAT PRO 610HT 3D printer. Thanks to the 3D printer's 610 x 508 x 508 mm build volume and 300°C constant chamber temperature, facilitated by a high-temperature thermal system, the part was accurately 3D printed.

CNES says the stratospheric drone structure has been designed to be placed under stratospheric balloons for validation tests. Consequently, CNES had to test the part in different environments, simulating conditions with thin to no atmosphere, to ensure suitability for extreme conditions.

During the prototype testing process, the design was qualified by mechanical engineers at CNES. In the same test, the material ULTEM 9085 was also qualified, confirming its compatibility inside vacuum chambers without outgassing, a crucial element for maintaining ideal optical performance.

In addition to its low outgassing properties, ULTEM is also crucial for aerospace due to its exceptional strength-to-weight ratio and high thermal resistance, making it ideal for manufacturing components subjected to extreme conditions in space.

Another notable project where 3D printing has been used is the MMX Rover, an alliance between CNES, the Japan Aerospace Exploration Agency (JAXA), and the German Aerospace Center (DLR). MMX, short for Martian Moons eXploration, is a small rover designed to explore Mars’ largest moon, Phobos. For this project, the team is utilizing 3D printing for creating, assembling, and testing the rover’s parts.

Moving forward, CNES intends to enhance its additive manufacturing technology while striving to maximize the benefits of 3D printing for space exploration.

The Invention of the Internet

Unlike technologies such as the phonograph or the safety pin, the internet has no single “inventor.” Instead, it has evolved over time. The internet got its start in the United States in the late 1960s as a military defense system in the Cold War. For years, scientists and researchers used it to communicate and share data with one another. Today, we use the internet for almost everything, and for many people it would be impossible to imagine life without it.

#internet #technology #history

The Sputnik Scare

On October 4, 1957, the Soviet Union launched the world’s first artificial satellite into orbit. The satellite, known as Sputnik, did not do much: It relayed blips and bleeps from its radio transmitters as it circled the Earth. Still, to many Americans, the beach ball-sized Sputnik was proof of something alarming: While the U.S. economy was booming and its consumer technologies were advancing, the Soviets had been focusing on training scientists—and were positioned to win the Space Race, and possibly the Cold War, because of it.

After Sputnik’s launch, many Americans began to think more seriously about science and technology. Schools added courses on subjects like chemistry, physics and calculus. Universities and corporations took government grants and invested them in scientific research and development. And the federal government itself formed new agencies, such as the National Aeronautics and Space Administration (NASA) and the Department of Defense’s Advanced Research Projects Agency (ARPA), to develop space-age technologies such as rockets, weapons and computers.

The Birth of the ARPAnet

Scientists and military experts were especially concerned about what might happen in the event of a Soviet attack on the nation’s telephone system. Just one missile, they feared, could destroy the whole network of lines and wires that made efficient long-distance communication possible.

In 1962, a scientist from ARPA named J.C.R. Licklider proposed a solution to this problem: an “intergalactic network” of computers that could talk to one another. Such a network would enable government leaders to communicate even if the Soviets destroyed the telephone system.

In 1965, Donald Davies, a scientist at Britain’s National Physical Laboratory, developed a way of sending information from one computer to another that he called “packet switching.” Packet switching breaks data down into blocks, or packets, before sending it to its destination. That way, each packet can take its own route from place to place. Without packet switching, the government’s computer network—now known as the Arpanet—would have been just as vulnerable to enemy attacks as the phone system.
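Here is a minimal sketch of the idea in modern terms; the packet size and the shuffling that stands in for packets taking different routes are illustrative only.

```python
import random

# Toy packet switching: break a message into numbered packets, let them arrive
# in any order (as if each took its own route), then reassemble at the destination.
def to_packets(message: str, size: int = 8):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    return "".join(data for _, data in sorted(packets))

packets = to_packets("Each packet can take its own route to the destination.")
random.shuffle(packets)     # simulate out-of-order arrival over different paths
print(reassemble(packets))  # the sequence numbers restore the original message
```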

'LOGIN'

On October 29, 1969, Arpanet delivered its first message: a “node-to-node” communication from one computer to another. (The first computer was located in a research lab at UCLA and the second was at Stanford; each one was the size of a large room.) The message—“LOGIN”—was short and simple, but it crashed the fledgling Arpanet anyway: The Stanford computer only received the note’s first two letters.

The Network Grows

By the end of 1969, just four computers were connected to the Arpanet, but the network grew steadily during the 1970s.

In 1972, it added the University of Hawaii’s ALOHAnet, and a year later it added networks at London’s University College and the Norwegian Seismic Array. As packet-switched computer networks multiplied, however, it became more difficult for them to integrate into a single worldwide “internet.”

By the mid-1970s, a computer scientist named Vinton Cerf had begun to solve this problem by developing a way for all of the computers on all of the world’s mini-networks to communicate with one another. He called his invention “Transmission Control Protocol,” or TCP. (Later, he added an additional protocol, known as “Internet Protocol.” The acronym we use to refer to these today is TCP/IP.) One writer describes Cerf’s protocol as “the ‘handshake’ that introduces distant and different computers to each other in a virtual space.”
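As a small present-day illustration of that “handshake,” here is a sketch using Python's standard socket library: opening a TCP connection (the operating system performs the three-way handshake during connection setup) and exchanging a few bytes with a web server. The host name and the bare-bones HTTP request are just example values.

```python
import socket

# Open a TCP/IP connection; the three-way handshake happens during connection setup.
host = "example.com"  # example host; any reachable web server works
with socket.create_connection((host, 80), timeout=5) as conn:
    # Send a minimal HTTP request over the established TCP stream.
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    conn.sendall(request.encode())
    reply = conn.recv(1024)  # read the first chunk of the response
    print(reply.decode(errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"
```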

The World Wide Web

Cerf’s protocol transformed the internet into a worldwide network. Throughout the 1980s, researchers and scientists used it to send files and data from one computer to another. However, in 1991 the internet changed again. That year, a computer programmer named Tim Berners-Lee, working at the CERN research center on the Swiss-French border, introduced the World Wide Web: an internet that was not simply a way to send files from one place to another but was itself a “web” of linked information that anyone on the internet could retrieve. Berners-Lee’s invention became the web we know today.

In 1992, a group of students and researchers at the University of Illinois developed a sophisticated browser that they called Mosaic. (It later became Netscape.) Mosaic offered a user-friendly way to search the Web: It allowed users to see words and pictures on the same page for the first time and to navigate using scrollbars and clickable links.

That same year, Congress authorized the National Science Foundation to connect the country’s research- and education-focused internet services to commercial networks. As a result, companies of all kinds hurried to set up websites of their own, and e-commerce entrepreneurs began to use the internet to sell goods directly to customers. By the 2000s, companies including Amazon and eBay emerged as dominant players in the online retail space.

In the first decade of the 2000s, social media platforms such as Facebook, Twitter and Instagram emerged, changing the way people connected, created and shared content. By around 2015, more people accessed the internet from smartphones than from other kinds of computers. By the early 2020s, companies including OpenAI, Google, Microsoft and others started rolling out advanced artificial intelligence systems to the public.

ADVANCED RESEARCH PROJECTS AGENCY

Washington 25, D.C. April 23, 1963

MEMORANDUM FOR: Members and Affiliates of the Intergalactic Computer Network

FROM: J. C. R. Licklider

SUBJECT: Topics for Discussion at the Forthcoming Meeting

First, I apologize humbly for having to postpone the meeting scheduled for 3 May 1963 in Palo Alto. The ARPA Command & Control Research office has just been assigned a new task that must be activated immediately, and I must devote the whole of the coming week to it. The priority is externally enforced. I am extremely sorry to inconvenience those of you who have made plans for May 3rd. Inasmuch as I shall be in Cambridge the rest of this week, I am asking my colleagues here to re-schedule the meeting, with May 10th, Palo Alto, as target time and place.

The need for the meeting and the purpose of the meeting are things that I feel intuitively, not things that I perceive in clear structure. I am afraid that that fact will be too evident in the following paragraphs. Nevertheless, I shall try to set forth some background material and some thoughts about possible interactions among the various activities in the overall enterprise for which, as you may have detected in the above subject, I am at a loss for a name.

In the first place, it is evident that we have among us a collection of individual (personal and/or organizational) aspirations, efforts, activities, and projects. These have in common, I think, the characteristics that they are in some way connected with advancement of the art or technology of information processing, the advancement of intellectual capability (man, man-machine, or machine), and the approach to a theory of science. The individual parts are, at least to some extent, mutually interdependent. To make progress, each of the active research needs a software base and a hardware facility more complex and more extensive than he, himself, can create in reasonable time.

In pursuing the individual objectives, various members of the group will be preparing executive the monitoring routines, languages AMD [sic.] compilers, debugging systems and documentation schemes, and substantive computer programs of more or less general usefulness. One of the purposes of the meeting–perhaps the main purpose–is to explore the possibilities for mutual advantage in these activities–to determine who is dependent upon whom for what and who may achieve a bonus benefit from which activities of what other members of the group.

It will be necessary to take into account the costs as well as the values, of course. Nevertheless, it seems to me that it is much more likely to be advantageous than disadvantageous for each to see the others’ tentative plans before the plans are entirely crystalized. I do not mean to argue that everyone should abide by some rigid system of rules and constraints that might maximize, for example, program interchangeability.

But, I do think that we should see the main parts of the several projected efforts, all on one blackboard, so that it will be more evident than it would otherwise be, where network-wide conventions would be helpful and where individual concessions to group advantage would be most important.

It is difficult to determine, of course, what constitutes “group advantage.” Even at the risk of confusing my own individual objectives (or ARPA’s) with those of the “group,” however, let me try to set forth some of the things that might be, in some sense, group or system or network desiderata.

There will be programming languages, debugging languages, time-sharing system control languages, computer-network languages, data-base (or file-storage-and-retrieval languages), and perhaps other languages as well. It may or may not be a good idea to oppose or to constrain lightly the proliferation of such. However, there seems to me to be little question that it is desireable to foster “transfer of training” among these languages. One way in which transfer can be facilitated is to follow group consensus in the making of the arbitrary and nearly-arbitrary decisions that arise in the design and implementation of languages. There would be little point, for example, in having a diversity of symbols, one for each individual or one for each center, to designate “contents of” or “type the contents of.”

It seems to me desirable to have as much homogenity as can reasonably be achieved in the set of sub-languages of a given language system–the system, for example, of programming, debugging, and time-sharing–control lanugages related to JOVIAL on the Q-32, or the system related to Algol (if such were developed and turned out to be different from the JOVIAL set) for the Q-32 computer, or the set related to FORTRAN for a 7090 or a 7094.

Dictating the foregoing paragraph led me to see more clearly than I had seen it before that the problem of achieving homogeneity within a set of correlated languages is made difficult by the fact that there will be, at a given time, only one time-sharing system in operation on a given computer, whereas more than one programming language with its associated debugging language may be simultaneously in use. The time-sharing control language can be highly correlated only with one programming and debugging language pair.

Insofar as syntax is concerned, therefore, it seems that it may be necessary to have a “preferred” language for each computer facility or system, and to have the time-sharing control language be consistent with the preferred. Insofar as semantics is concerned–or, at least, insofar as the association of particular symbols with particular control functions is concerned–I see that it would be possible, thought perhaps inconvenient, to provide for the use, by several different operators, of several different specific vocabularies. Anyway, there seems to me to be a problem, or a set of problems, in this area.

There is an analogous problem, and probably a more difficult one, in the matter of language for the control of a network of computers. Consider the situation in which several different centers are netted together, each center being highly individualistic and having its own special language and its own special way of doing things. Is it not desirable, or even necessary for all the centers to agree upon some language or, at least, upon some conventions for asking such questions as “What language do you speak?” At this extreme, the problem is essentially the one discussed by science fiction writers: “how do you get communications started among totally uncorrelated “sapient” beings?”

But, I should not like to make an extreme assumption about the uncorellatedness. (I am willing to make an extreme assumption about the sapience.) The more practical set of questions is: Is the network control language the same thing as the time-sharing control language? (If so, the implication is that there is a common time-sharing control language.) Is the network control language different from the time-sharing control language, and is the network-control language common to the several netted facilities? Is there no such thing as a network-control language? (Does one, for example, simply control his own computer in such a way as to connect it into whatever part of the already-operating net he likes, and then shift over to an appropriate mode?)

In the foregoing paragraphs, I seem to have lept into the middle of complexity. Let me approach from a different starting point. Evidently, one or another member of this enterprise will be preparing a compiler, or compilers, for modifying existing programs that compile FORTAN [sic.], JOVIAL, ALGOL, LISP and IPL-V (or V-l, or V-ll). If there is more than one of any one of the foregoing, or of any one of others that I do not foresee, then it seems worthwhile to examine the projected efforts for compatibility. Moreover, to me, at least, it seems desireable to examine the projected efforts to see what their particular features are, and to see whether there is any point in defining a collection of desireable features and trying to get them all into one language and one system of compilers.

I am impressed by the argument that list-structure features are important as potential elements of ALGOL or JOVIAL, that we should think in terms of incorporating list-structure features into existing languages quite as much as in terms of constructing languages around list-structures.

It will possibly turn out, I realize, that only on rare occasions do most or all of the computers in the overall system operate together in an integrated network. It seems to me to be interesting and important, nevertheless, to develop a capability for integrated network operation. If such a network as I envisage nebulously could be brought into operation, we would have at least four large computers, perhaps six or eight small computers, and a great assortment of disc files and magnetic tape units–not to mention the remote consoles and teletype stations–all churning away.

It seems easiest to approach this matter from the individual user’s point of view–to see what he would like to have, what he might like to do, and then to try to figure out how to make a system within which his requirements can be met. Among the things I see that a user might want to have, or to do, are the following:

(Let me suppose that I am sitting at a console that includes a cathode-ray-tube display, light-pen, and a typewriter.) I want to retrieve a set of experimental data that is on a tape called Listening Test. The data are called “experiment 3.” These data are basically percent- ages for various signal-to-noise ratios. There are many such empirical functions. The experiment had a matrix design, with several listeners, several modes of presentation, several signal frequencies, and several durations.

I want, first, to fit some “theoretical” curves to the measured data. I want to do this in a preliminary way to find out what basic function I want to choose for the theoretical relation between precentage [sic.] and signal-to-noise ratio. On another tape, called “Curve Fitting,” I have some routines that fit straight lines, power functions, and cumulative normal curves. But, I want to try some others, also. Let me try, at the beginning, the functions for which I have programs. The trouble is, I do not have a good grid-plotting program.

I want to borrow one. Simple, rectangular coordinates will do, but I would like to specify how many divisions of each scale there should be and what the labels should be. I want to put that information in through my typewriter . Is there a suitable grid-plotting program anywhere in the system? Using prevailing network doctrine, I interrogate first the local facility, and then other centers. Let us suppose that I am working at SDC, and that I find a program that looks suitable on a disc file in Berkeley. My programs were written in JOVIAL.

The programs I have located throught the system were written in FORTRAN. I would like to bring them in as relocatable binary programs and, using them as subroutines, from my curve-fitting programs, either at “bring-in time” or at “run-time.”

Supposing that I am able to accomplish the steps just described, let us proceed. I find that straight lines, cubics, quintics, etc., do not provide good fits to the data. The best fits look bad when I view them on the oscilloscope.

The fits of the measured data to the cumulative normal curve are not prohibitively bad. I am more interested in finding a basic function that I can control appropriately with a few perimeters than I am in making contact with any particular theory about the detection process, so I want to find out merely whether anyone in the system has any curve- fitting programs that will accept functions supplied by the user or that happen to have built-in functions roughly like the cumulative normal curve, but assymmetrical.

Let us suppose that I interrogate the various files, or perhaps interrogate a master-integrated, network file, and find out that no such programs exist. I decide, therefore, to go along with the normal curve.

At this point, I have to do some programming. I want to hold on to my data, to the programs for normal curve fitting, and to display programs that I borrowed. What I want to do is to fit cumulative normal curves to my various sub-sets of data constraining the mean and the variance to change slowly as I proceed along any of the ordinal or ratio- scale dimensions of my experiment, and permitting slighly different sets of perimeters for the various subjects.

So, what I want to do next is to create a kind of master program to set perimeter values for the curve-fitting routines, and to display both the graphical fits and the numerical measures of goodness to fit as, with light-pen and graphics of perimeters versus independent variables on the oscilliscope screen, I set up and try out various (to me) reasonable configurations. Let us say that I try to program repeatedly on my actual data, with the subordinate programs already mentioned, until I get the thing to work.

Let us suppose that I finally do succeed, that I get some reasonable results, photograph the graphs showing both the empirical data and the “theoretical” curves, and retain for future use the new programs. I want to make a system of the whole set of programs and store it away under the name “Constrained-perimeter Normal-curve-fitting System.”

But, then suppose that my intuitively natural way of naming the system is at odds with the General guidelines of the network for naming programs. I would like to have this variance from convention called to my attention, for I am a conscientious “organization man” when it comes to matters of program libraries and public files of useful data.

In the foregoing, I must have exercised several network features. I engaged in information retrieval through some kind of system that looked for programs to meet certain requirements I had in mind. Presumably, this was a system based upon descriptors, or reasonable facsimiles thereof, and not in the near future, upon computer appreciation of natural language.

However, it would be pleasant to use some of the capabilities of avant-garde linguistics. In using the borrowed programs, I effected some linkages between my programs and the borrowed ones. Hopefully, I did this without much effort–hopefully, the linkages were set up–or the basis for making them was set up–when the programs were brought into the part of the stytem [sic.] that I was using. I did not borrow any data, but that was only because I was working on experimental data of my own. If I had been trying to test some kind of a theory, I would have wanted to borrow data as well as programs.

When the computer operated the programs for me, I suppose that the activity took place in the computer at SDC, which is where we have been assuming I was. However, I would just as soon leave that on the level of inference. With a sophisticated network-control system, I would not decide whether to send the data and have them worked on by programs somewhere else, or bring in programs and have them work on my data. I have no great objection to making that decision, for a while at any rate, but, in principle, it seems better for the computer, or the network, somehow, to do that. At the end of my work, I filed some things away, and tried to do it in such a way that they would be useful to others. That called into play, presumably, some kind of a convention-monitoring system that, in its early stages, must almost surely involve a human criterian as well as maching [sic.] processing.

The foregoing (unfortunately long) example is intended to be a kind of example of example. I would like to collect, or see someone collect, a considerable number of such examples, and to see what kind of software and hardware facilities they imply. I have it well in mind that one of the implications of a considerable number of such examples would be a very large random-access memory.

Now, to take still another approach to this whole matter, let me string-together a series of thoughts that are coming to mind. (I was interrupted at this point, and the discussion almost has to take a turn.) First, there is the question of “pure procedure.” I understand that the new verion of JOVIAL is going to compile programs in “pure-procedure” style.

Will the other compilers at the other centers do likewise? Second, there is the question of the interpretation, at one center, of requests directed to it from another center. I visualize vaguely some kind of an interpretive system that would serve to translate the incoming language into commands or questions of the form in terms of which the interrogated center operates. Alternatively, of course, the translation could be done at the sending end. Still alternatively, the coordination could be so good that everybody spoke a common language and used a common set of formates. Third, there is the problem of protecting and updating public files. I do not want to use material from a file that is in the process of being changed by someone else. There may be, in our mutual activities, something approximately analogous to military security classification. If so, how will we handle it?

Next, there is the problem of incremental compiling. Am I correct in thinking that Perlis, with his “threaded lists,” has that problem, and the related problem of com- pile-test-recompile, essentially solved?

Over on the hardware side, I am worried that the boundry- registered problem, or more generally the memory-protection problem, may be expensive to solve on the Q-32 and both difficult and expensive to solve on other machines, and I am worried that the problem of swapping or transferring information between core and secondary memory will be difficult and expensive on 7090s and 7094s–and I worry that time-sharing will not be much good without fast swaps or transfers. What are the best thoughts on these questions? In what state are our several or collective plans?

Implicit in the long example was the question of linking subroutines at run time. It is easy to do the calling, itself, through a simple directory, but it seems not to be so simple to handle system variables. Maybe it is simple in principle and perhaps I should say that it seems possibly infeasible to handle the linking of the system variables at run time through tables or simple addressing schemes.

It is necessary to bring this opus to a close because I have to go catch an airplane. I had intended to review ARPA’s Command-and-Control interests in improved man-computer interaction, in time-sharing and in computer networks. I think, however, that you all understnad [sic.] the reasons for ARPA’s basic interest in these matters, and I can, if need be, review them briefly at the meeting. The fact is, as I see it, that the military greatly needs solutions to many or most of the problems that will arise if we tried to make good use of the facilities that are coming into existence.

I am hoping that there will be, in our individual efforts, enought evident advantage in cooperative programming and operation to lead us to solve th problems and, thus, to bring into being the technology that the military needs. When problems arise clearly in the military context and seem not to appear in the research context, then ARPA can take steps to handle them on an ad hoc basis. As I say, however, hopefully, many of the problems will be essentially as important, in the research context as in the military context.

In conclusion, then, let me say again that I have the feeling we should discuss together at some length questions and problems in the set to which I have tried to point in the foregoing discussion. Perhaps I have not pointed to all the problems. Hopefully, the discussion may be a little less rambling than this effort that I am now completing.

Who Invented the Internet?

As you might expect for a technology so expansive and ever-changing, it is impossible to credit the invention of the Internet to a single person. The internet was the work of dozens of pioneering scientists, programmers and engineers who each developed new features and technologies that eventually merged to become the “information superhighway” we know today.

Long before the technology existed to actually build the internet, many scientists had already anticipated the existence of worldwide networks of information. Nikola Tesla toyed with the idea of a “world wireless system” in the early 1900s, and visionary thinkers like Paul Otlet and Vannevar Bush conceived of mechanized, searchable storage systems of books and media in the 1930s and 1940s.

Still, the first practical schematics for the internet would not arrive until the early 1960s, when MIT’s J.C.R. Licklider popularized the idea of an “Intergalactic Network” of computers. Shortly thereafter, computer scientists developed the concept of “packet switching,” a method for effectively transmitting electronic data that would later become one of the major building blocks of the internet.

The first workable prototype of the Internet came in the late 1960s with the creation of ARPANET, or the Advanced Research Projects Agency network. Originally funded by the U.S. Department of Defense, ARPANET used packet switching to allow multiple computers to communicate on a single network.

On October 29, 1969, ARPAnet delivered its first message: a “node-to-node” communication from one computer to another. (The first computer was located in a research lab at UCLA and the second was at Stanford; each one was the size of a small house.) The message—“LOGIN”—was short and simple, but it crashed the fledgling ARPA network anyway: The Stanford computer only received the note’s first two letters.

The technology continued to grow in the 1970s after scientists Robert Kahn and Vinton Cerf developed Transmission Control protocol and Internet Protocol, or TCP/IP, a communications model that set standards for how data could be transmitted between multiple networks.

ARPANET adopted TCP/IP on January 1, 1983, and from there researchers began to assemble the “network of networks” that became the modern Internet. The online world then took on a more recognizable form in 1990, when computer scientist Tim Berners-Lee invented the World Wide Web. While it’s often confused with the internet itself, the web is actually just the most common means of accessing data online in the form of websites and hyperlinks.

The web helped popularize the internet among the public, and served as a crucial step in developing the vast trove of information that most of us now access on a daily basis.

New Tesla Model Y ‘Juniper’ design changes revealed in China

#tesla #Modely #juniper #china

[Video transcript, 9:06–9:21] ...is the fact that Tesla has done some marketing as well, getting the word out there more. I also think word of mouth helps; there are a lot of people in China that I spoke to in person, and when I asked them why they had got that car, they were like, "well, because it's good." Thanks for watching. [Music]

3 years after turning Facebook into Meta, Mark Zuckerberg's real win is AI

While Meta's metaverse dreams have yet to come true, the company's artificial intelligence efforts are paying off

Facebook (META) began as a digital college yearbook, connecting Harvard students face to face. Three years ago on Monday, Mark Zuckerberg rebranded his social media empire as Meta, betting billions on a future where we’d meet in virtual worlds instead.

#facebook #metaverse #bigtech #technology #markzuckerberg #meta

The Metaverse Has Dimmed: How Meta's Pivot to AI Has Changed the Game

In October 2021, Mark Zuckerberg, the CEO of Meta, stood on stage at the Facebook Connect conference, proclaiming that the metaverse was the next frontier for his company. He envisioned a future where the metaverse would reach a billion people, conduct "hundreds of billions" of dollars worth of commerce, and employ millions – all within the decade. However, three years later, the metaverse has taken a backseat to a new shiny object: artificial intelligence (AI).

Meta has invested over $63 billion in Reality Labs, its division for virtual and augmented reality technology, but the results have been lukewarm. The company's AI research division, led by pioneer Yann LeCun, has been advancing the field, and the pivot to AI has already paid dividends. In July, Meta reported stronger-than-expected sales, crediting AI improvements in ad targeting. The company is now rolling out AI tools to help marketers enhance their listings.

Zuckerberg's public messaging has increasingly focused on the transformative potential of generative AI, and the results are evident. Meta's stock price has almost tripled since last year and is up more than 60% in 2024, hitting an all-time closing high of $595.94 per share earlier this month.

The metaverse, once touted as the future of the Internet, has dimmed in comparison. Global shipments of VR and AR headsets have sunk roughly 28% since last September, with growth expected in 2025. However, Meta's Ray-Ban smart glasses have found success, with more than 730,000 units sold in their first three quarters. The company's recent triumph with Ray-Ban smart glasses has paved the way for an even more ambitious project: Orion. This cutting-edge eyewear prototype showcases how AI could power next-generation augmented reality, with features like real-time 3D mapping and advanced scene understanding.

But not everyone is convinced that the pivot to AI is a sustainable strategy. Gene Munster, a managing partner at Deepwater Asset Management, notes that the company's "drunken sailor" spending on metaverse projects may not be sustainable alongside the growing costs of generative AI development. While Meta doesn't need to completely abandon Reality Labs, Munster says the next couple of years are crucial: Either the hardware needs to advance enough to prove the opportunity is real, or the spending needs to be redirected.

The future of the internet is indeed 3D, but it's not just about VR headsets and virtual worlds. Meta's vision is about the evolution of the internet itself into a 3D medium spanning virtual reality, augmented reality, and traditional screens. As Matthew Ball, an entrepreneur and author of "The Metaverse: And How It Will Revolutionize Everything," notes, "The metaverse is not limited to and does not even require virtual reality. Most people NOW believe we will use no term to describe it whatsoever – we'll just call it the 3D internet."

However, some critics argue that Meta's pivot to AI is just a new shiny diversion, rather than a genuine attempt to address the company's past mistakes. Sara M. Watson, a technology critic and independent industry analyst, notes that the company's aggressive competition to build cutting-edge AI is reminiscent of its "move fast and break things" approach that led to congressional mea culpas and the name change from Facebook to Meta.

As Meta continues to invest in AI, it's clear that the company is building a new ecosystem that could entrench its power. If Llama becomes the go-to infrastructure for AI tools, that would give Zuckerberg the same kind of control over generative AI that he once held over social networking. As Watson notes, "You build a thing that everyone needs, and they build on top of that. Then you become essential no matter how you decide to monetize."

In the end, the metaverse may have dimmed, but the future of the internet is still being shaped by Meta's pivot to AI. Whether this new direction will prove sustainable or just another diversion remains to be seen.

The Evolution of Meta: From Facebook to the Metaverse

In 2004, Mark Zuckerberg launched Facebook from his college dorm, and no one could have predicted how big it would become. It started as a simple platform for Harvard students to connect, but it quickly grew into one of the biggest social networks in the world. Over the years, Facebook expanded by buying other popular apps like Instagram and WhatsApp. Then in 2021, it made a big change: it rebranded as Meta. This new name reflected the company’s ambition to go beyond social media and dive into the future of the digital world: the metaverse.

This article will explore how Meta grew from Facebook and how it’s working to bring the metaverse to life.

#meta #metaverse #facebook #technology #mixedreality

Facebook: A Social Revolution

Before becoming Meta, Facebook changed the way we communicate online. It allowed people to stay in touch, share moments, and connect with others around the world. Facebook became more than just a place to talk to friends; it became a tool for businesses, brands, and communities to reach people globally.

But as Facebook grew, it faced challenges. There were concerns about privacy, fake news, and political issues. Despite these problems, Facebook continued to dominate social media, especially after it bought Instagram in 2012 and WhatsApp in 2014.

The Move to Virtual Reality: Buying Oculus

In 2014, Facebook made a big move that signaled its future direction: it bought Oculus, a company that specialized in virtual reality (VR). With Oculus, users could step into immersive digital worlds using VR headsets. At the time, VR was mostly popular with gamers, but Zuckerberg saw its potential to change how we interact with technology and each other. This was the start of Facebook’s shift toward what would later become the metaverse.

Why the Switch to Meta?

In October 2021, Facebook officially changed its name to Meta. But why? There were a few reasons:

Slowing Growth: By 2021, Facebook wasn’t growing as fast as it used to, especially with younger people who preferred apps like TikTok and Snapchat.

Negative Image: Over the years, Facebook became linked to scandals like data privacy issues and the spread of fake news. By rebranding to Meta, the company aimed to move past these problems and start fresh.

A New Vision: Zuckerberg and his team had a new goal: the metaverse. This is a virtual world where people could socialize, work, play, and shop all in a shared digital space. The name “Meta” symbolized this shift from being just a social media platform to something much bigger.

What Is the Metaverse?

The metaverse is a digital universe where people can interact with each other and virtual environments in real time. Think of it as a 3D version of the internet, where you could do things like attend meetings, play games, or shop, all while being fully immersed in a digital world.

Meta’s goal is to create a metaverse that blends virtual reality (VR), augmented reality (AR), and social media, allowing people to do everyday things in a shared virtual space. While the idea sounds futuristic, it’s still in the early stages of development.

Other companies like Microsoft and Google are also working on their own versions of the metaverse, making it a competitive race to see who can build it first.

Meta’s Metaverse Vision

To build the metaverse, Meta is focusing on a few key areas:

Horizon Worlds: This is Meta’s main virtual reality platform, where users can create and explore digital environments, interact with others, and experience a taste of what the metaverse could look like.

Oculus VR Headsets: These headsets, now called Meta Quest, are a key part of the metaverse experience. They allow users to step into 3D environments and explore virtual worlds.

Avatars: Meta is working on customizable avatars that represent users in the metaverse, allowing people to express their personalities and interact in virtual spaces.

Augmented Reality (AR): AR blends the digital world with the real world. Meta’s AR glasses, which are still being developed, will let users experience digital content overlaid on their physical surroundings.

Challenges Meta Faces

While the idea of the metaverse is exciting, Meta has a lot of hurdles to overcome:

Privacy Concerns: Meta has faced criticism over its handling of user data in the past, and the metaverse raises new concerns about how people’s personal information will be protected in this new digital space.

Tech Limitations: The technology needed for the metaverse, like VR headsets and fast internet, is still expensive and not available to everyone. Creating a fully connected virtual universe will require advances in hardware and infrastructure.

Competitors: Meta isn’t the only company working on the metaverse. Other tech giants like Microsoft and Google are developing their own versions, which means Meta has tough competition.

What’s Next for Meta and the Metaverse?

Meta is betting big on the future of the metaverse, pouring billions of dollars into developing this digital world. Whether or not the metaverse will become as popular as social media is today, one thing is clear: Meta is no longer just about connecting people online. It’s aiming to transform how we live, work, and play in the digital world.

Will the metaverse be the next big thing, or will it just be another tech experiment? Only time will tell, but Meta’s transformation from Facebook to a company building digital universes marks a major shift in its history and in the tech industry.

Exploring the Metaverse: A Deep Dive into Virtual Reality Social Platforms

Introduction:

In the ever-evolving landscape of technology, the concept of the Metaverse has taken center stage, promising a new dimension to our digital interactions. At the intersection of virtual reality and social connectivity, Virtual Reality Social Platforms are emerging as the gateway to this immersive digital universe. In this deep dive, we’ll unravel the layers of the Metaverse, examining the impact of Virtual Reality Social Platforms on the way we connect, communicate, and socialize in the digital realm.

#vr #meta #technology #metaverse #vr

The Essence of the Metaverse:

Defining the Metaverse:
The term “Metaverse” refers to a collective virtual shared space, merging augmented reality, virtual reality, and the internet. It transcends the boundaries of traditional online spaces, offering users a dynamic, immersive, and interconnected digital experience. Virtual Reality Social Platforms serve as the portals to this expansive Metaverse, redefining how we engage with others in a digital environment.

Beyond Virtual Reality:

The Social Aspect:
While Virtual Reality is the driving force behind these platforms, the focus is not solely on the technology itself. Virtual Reality Social Platforms emphasize the social aspect, aiming to recreate and enhance the nuances of real-world interactions. From attending virtual events to collaborating on projects, these platforms seek to replicate the richness of face-to-face communication within a digital space.

The Rise of Virtual Reality Social Platforms:

Virtual Gatherings: Events in the Digital Realm:
The emergence of Virtual Reality Social Platforms marks a paradigm shift in the way we attend events. From virtual conferences and concerts to immersive meetups, these platforms enable users to gather in shared virtual spaces, transcending geographical barriers. Attending an event no longer requires physical presence; instead, users can don their VR headsets and immerse themselves in a collective digital experience.

Collaborative Workspaces:

Redefining Remote Collaboration:
In the era of remote work, Virtual Reality Social Platforms are transforming how teams collaborate. Virtual offices, meeting rooms, and collaborative workspaces enable professionals to engage in a shared virtual environment, fostering a sense of presence and collaboration. The spatial aspect of these platforms adds a layer of realism, making virtual meetings more engaging and productive.

Key Features of Virtual Reality Social Platforms:

Avatars: The Digital Reflection
In Virtual Reality Social Platforms, users navigate the digital space through avatars – digital representations of themselves. These avatars can be customized to reflect users’ personalities, allowing for self-expression in the virtual realm. This digital embodiment adds a personal touch to interactions, making them more immersive and engaging.

Real-time Communication:

Breaking Barriers:
Virtual Reality Social Platforms prioritize real-time communication, breaking down the barriers of traditional messaging. Through spatial audio and expressive gestures, users can engage in natural conversations, replicating the dynamics of face-to-face communication. This emphasis on real-time interaction enhances the sense of presence and connectivity within the digital space.

Content Creation and Sharing:

A Creative Playground:
Beyond communication, Virtual Reality Social Platforms serve as creative playgrounds. Users can create, share, and interact with user-generated content in the form of virtual art, environments, and experiences. This collaborative aspect adds a layer of richness to the Metaverse, allowing users to contribute to the evolving digital landscape.

Social Challenges and Ethical Considerations:

Digital Identity and Privacy:
While Virtual Reality Social Platforms offer a new frontier for social interactions, they also raise concerns about digital identity and privacy. Users navigate the digital realm through avatars, prompting questions about the security of personal data and the potential for identity-related issues. Striking a balance between social engagement and safeguarding user privacy is a crucial consideration in the development of these platforms.

The Future of Virtual Reality Social Platforms:

Integration with Augmented Reality:
As technology advances, the integration of Augmented Reality (AR) into Virtual Reality Social Platforms is a promising avenue. This combination can enhance the blending of virtual and real-world elements, creating a seamless and interconnected digital experience. The evolution towards Mixed Reality further expands the possibilities for immersive social interactions.

Enhanced Interactivity through AI:

Artificial Intelligence (AI) is poised to play a pivotal role in enhancing interactivity within Virtual Reality Social Platforms. AI algorithms can analyze user behavior, adapt to preferences, and create dynamic, personalized experiences. This level of responsiveness contributes to a more engaging and immersive social environment within the Metaverse.

Expanded Use Cases:

Beyond Socializing:
The future of Virtual Reality Social Platforms extends beyond socializing. These platforms are likely to find applications in education, healthcare, and various professional fields. Virtual classrooms, collaborative medical simulations, and virtual conferences are just glimpses of the diverse use cases that could redefine how we learn, work, and collaborate in the digital age.

SEO Optimization:

Navigating the Digital Landscape:
As we explore the Metaverse and its impact on Virtual Reality Social Platforms, it’s essential to consider the role of Search Engine Optimization (SEO). These platforms, being digital spaces, can benefit from SEO strategies to enhance visibility and accessibility. Utilizing relevant keywords, creating engaging content, and optimizing for search algorithms can ensure that Virtual Reality Social Platforms reach a wider audience in the vast digital landscape.

Conclusion:

Virtual Reality Social Platforms are at the forefront of reshaping our digital interactions within the expansive Metaverse. From virtual gatherings and collaborative workspaces to the creative expression through avatars, these platforms offer a glimpse into the future of social connectivity. As technology continues to advance, ethical considerations and inclusivity will be pivotal in ensuring that the Metaverse remains a vibrant and accessible digital frontier for users worldwide. The journey into the Metaverse has just begun, and the possibilities for immersive social experiences are boundless in the ever-evolving realm of Virtual Reality Social Platforms.

What Is the Metaverse, Exactly?

  • The metaverse uses virtual reality and augmented reality to virtually transport you to a different place, or world.
  • Accessing the metaverse is as simple as putting on a virtual reality headset and holding a set of controllers.
  • While its biggest use at present is gaming, the metaverse will increasingly be used for shopping, education, job training, doctor’s appointments and socializing.
  • There are a number of ways to make money in the metaverse, including buying and selling virtual real estate, trading cryptocurrency and NFTs, and selling goods/products, both real-world and virtual.
  • Experts predict interactions in the metaverse will become commonplace in the next five to ten years.

#metaverse #technology #mixedreality #internet

The Metaverse: A Virtual World Coming to Life

For decades, humans have been fascinated by the idea of a digital, all-consuming, and futuristic realm. From the 1990s novel Snow Crash to the 2010s novel and movie Ready Player One, this concept has been referred to by various names, including the Metaverse, the Matrix, and OASIS. Now, in the 2020s, the term Metaverse is back, and it's feeling more real than ever. But what is the Metaverse, exactly? And how does it differ from virtual reality (VR)?

Defining the Metaverse

According to Chris Madsen, senior engineer for Engage, a professional virtual reality and augmented reality (AR) platform, the Metaverse can be thought of as the "universe" of the virtual world. It's founded on the internet but is much more expansive, not owned by a single country or corporation. Think of it as the internet's next evolution, where everything that exists in the real world can be found and experienced in a virtual environment. The Metaverse is a vast, interconnected network of virtual worlds, allowing users to explore, interact, and create their own experiences.

The Evolution of the Metaverse

The Metaverse is currently in its early stages, similar to the early days of the internet in the 1990s, when websites were limited and the technology was still developing. However, the Metaverse is evolving faster than ever, with advancements in VR, AR, and other technologies. The Metaverse is being shaped by the convergence of various technologies, including blockchain, artificial intelligence, and cloud computing. As these technologies continue to advance, the Metaverse will become more immersive, interactive, and accessible.

What is Virtual Reality (VR)?

Virtual Reality (VR) is a technology that creates an immersive, simulated environment that can be experienced and interacted with in a seemingly real or physical way. VR is a key component of the Metaverse, allowing users to enter and engage with virtual worlds. VR headsets, such as Oculus and Vive, are popular examples of VR technology. VR provides a way for users to experience the Metaverse, but it's just one tool among many that will be used to explore and interact within the virtual world.

Key Differences between the Metaverse and VR

While the Metaverse and VR are related, they are not the same thing. The Metaverse is a broader concept that encompasses multiple virtual worlds and experiences, whereas VR is a specific technology used to create immersive experiences within those worlds. Think of the Metaverse as the "world" and VR as a "tool" used to explore and interact within that world. The Metaverse is a platform that enables VR, AR, and other technologies to come together and create a seamless, interconnected experience.

The Future of the Metaverse

As the Metaverse continues to evolve, it's expected to change the way we live, work, and interact with each other. With the rise of Web3 and NFTs, the Metaverse is poised to become a major player in the tech industry. The Metaverse has the potential to revolutionize industries such as education, healthcare, and entertainment, and will likely create new opportunities for businesses and individuals alike.

While it may seem like a distant concept, the Metaverse is already here, and its impact will be felt in the years to come.

Conclusion

The Metaverse is a virtual world that's coming to life, and it's more than just a concept. With the help of VR, AR, and other technologies, the Metaverse is evolving faster than ever. Whether you're familiar with the term or not, it's essential to understand what the Metaverse is and how it will shape our future. As the Metaverse continues to grow and develop, it's likely to become an integral part of our daily lives, changing the way we interact with each other and the world around us.

What does metaverse mean?

There’s a reason for the confusion: There isn’t one simple definition of the metaverse, says Madsen. Most people think of it generally as a virtual place where people, companies or other entities can create their own virtual worlds. It’s an “extended reality,” which uses virtual reality and augmented reality to take you out of your real world and into a different, virtual world, Madsen explains.

But the word is currently being used in many different contexts in wildly different ways. For instance, the Forbes Technology Council gave it an expansive definition as a Marvel-esque “multiverse of metaverses.”

For his part, Meta/Facebook CEO Mark Zuckerberg famously defined the metaverse as not a place at all, virtual or otherwise, but a time. “One definition of this [the metaverse] is it’s about a time when basically immersive digital worlds become the primary way that we live our lives and spend our time,” he said in a February 2022 interview.

(And don’t be confused: Despite Facebook rebranding in Oct. 2021 to Meta Platforms Inc., or just Meta for short, Meta isn’t the entire metaverse, just like Facebook isn’t the entire internet.)

Remember, “internet” didn’t mean much at first either, and eventually people settled on a universal understanding of the term. Over time the same will happen with metaverse (or whatever term becomes the popular choice), says Shannon.

Gaming

Currently the most popular use of virtual reality, games use the metaverse to create an immersive gaming experience. Computer and console-based games like World of Warcraft and Roblox are creating metaverse games, part of the future of immersive technology.

Shopping

The opportunity to make money via marketing and increased sales is what entices most companies to the metaverse, and it’s where lots of the tech development is currently focused. The goal is to provide a shopping experience even better than you could get in real life. For instance, you might try on clothing using a digital avatar that matches your real-world dimensions, letting you try on multiple dresses for that upcoming wedding without ever leaving home or messing up your hair. Similarly, you can go through a virtual Walmart, selecting items and adding them to your cart in a way that is clearer and faster than either a real-world shopping trip or the current online click-through experience. The physical goods are then delivered to your home.

Companies including Gucci (via The Sandbox), Ralph Lauren and Nike (via Roblox) and Balenciaga and Moncler (via Fortnite, see below) have all dabbled with storefronts in the metaverse. While they aren’t fully functional stores, the goal is to offer both physical goods and digital-only offerings, like NFTs, avatars and virtual clothing.

Job training

From teaching doctors how to perform surgery to the requisite safety trainings for new hires, the metaverse offers an easier and safer way to educate people. Here, you can practice first aid skills, learn complicated machinery and protocols, and take classes at a convenient time and place, all without endangering any real human bodies.

Education

The future of university classrooms lies in the metaverse, where anyone can learn cutting-edge information from the best professors around the world, says Madsen. In January 2022, Stanford University launched “Communication 166/266 Virtual People,” its first class hosted in the metaverse (students participate with Oculus 2 headsets), and other academic institutions are following suit. As of this writing, “metaversities” include Morehouse, Fisk, New Mexico State University, South Dakota State University, Florida A&M University, West Virginia University and the University of Maryland Global Campus.

Working remotely

Think Zoom is convenient? What if you could “appear” in a meeting room to collaborate with your colleagues without ever leaving your home (or your pajamas)? Virtual workspaces are cheaper and more accessible, and will eventually become ubiquitous.

Doctor’s appointments

Anything that doesn’t require directly touching the body, from therapy to medication checks, could be done in virtual doctors’ offices.

Travel

Check out museums across the world, hike through rainforests without damaging wildlife or even take part in space tourism via virtual travel portals without having to buy an expensive ticket.

Social activities

Social media goes next level in the metaverse, says Shannon. Not only can you share information, pictures and videos, you can play group games, chat in VR rooms or even go on a date.

Entertainment

In addition to games, the metaverse is perfect for other types of entertainment. For instance, virtual movie theaters provide a much better experience than home TVs. Big names including Ariana Grande, The Chainsmokers and Travis Scott have all hosted digital concerts in the metaverse, where people got a much better view than the nosebleed seats they might have had in real life, plus digital extras that made the concerts more immersive. But Madsen’s favorite? Virtual mini golf with friends!

How to access the metaverse

Accessing the Metaverse: A Comprehensive Guide to Hardware and Software Requirements

As the metaverse continues to evolve and expand, accessing this virtual world requires a combination of specialized hardware and software. The specific requirements depend on what you want to do, but here's a detailed breakdown of what you need to get started.

Hardware Requirements

To access the metaverse, you'll need a range of hardware devices, including:

  • Phones: Smartphones with high-quality cameras and processing power are essential for accessing the metaverse. Look for devices with at least 6GB of RAM, a quad-core processor, and a high-resolution camera.
  • Computers: Laptops or desktops with powerful processors and graphics cards are necessary for running metaverse applications. Consider devices with at least 8GB of RAM, an Intel Core i5 processor, and a dedicated graphics card.
  • Headsets: VR headsets are a crucial component for immersive metaverse experiences. Headsets range from budget-friendly options like cardboard versions ($30) to high-end sets with multiple cameras and sensory outputs ($1,000). Popular VR headset brands include Oculus, HTC, and Valve.
  • 3D screens: Some metaverse applications require 3D screens for an immersive experience. These screens can be integrated into VR headsets or used as standalone devices.
  • Gloves: Specialized gloves with sensors and haptic feedback can enhance the metaverse experience. These gloves can provide a more immersive and interactive experience, allowing you to feel tactile sensations and manipulate virtual objects.

Software Requirements

In addition to hardware, you'll need software to access the metaverse. This includes:

  • Games: Popular VR games require a VR headset and controllers. Look for games that are compatible with your headset and controller.
  • Programs: Specialized programs and applications are necessary for accessing specific metaverse platforms. These programs can range from simple browsers to complex software development kits (SDKs).
  • Operating systems: You'll need a compatible operating system to run metaverse applications. This can include Windows, macOS, or Linux.

The Fragmented Metaverse

Unlike traditional computing, there is no unified metaverse. Instead, each company is developing its own platform, headsets, and technology; the major players are listed below.

Popular Metaverse Platforms

Some of the most popular platforms for accessing the metaverse include:

  • Meta (formerly Facebook)
  • Oculus (now owned by Meta)
  • Sony
  • HTC
  • Pico
  • Valve
  • Samsung

Augmented Reality and the Future of Accessing the Metaverse

While VR headsets are a key component of the metaverse, augmented reality (AR) is also an important aspect. AR experiences can be accessed through your phone screen and camera; think Snapchat filters or Pokémon Go. In the future, Madsen predicts that accessing the metaverse will be as simple as wearing a pair of eyeglasses. AR technology is expected to play a significant role in the development of the metaverse, allowing users to seamlessly transition between virtual and physical environments.

Conclusion

Accessing the metaverse requires a combination of specialized hardware and software. As the technology continues to evolve, we can expect to see new and innovative ways to access this virtual world. Whether you're interested in VR, AR, or other metaverse experiences, understanding the hardware and software requirements is essential for getting started. With the right hardware and software, you can unlock a world of immersive experiences and endless possibilities.

How does the metaverse work?

The technology underpinning the metaverse is cobbled together from other technologies, including virtual reality, blockchain and Web 3 (along with more mature programming tech that underpins the internet). Blockchain is a way of storing chunks of data in “blocks,” which are linked together into a chain, with each block referencing the one before it. Blockchain databases provide a way to share data while guaranteeing fidelity and security, which is why they are such a critical component of cryptocurrency. Blockchain provides the building blocks for Web 3, the newest iteration of the internet that provides the framework for extended reality. Last, virtual reality builds on these technologies to simulate a real-world experience. Most VR software is based on a “virtual world generator,” which is made using a software development kit from a specific VR headset vendor. This kit provides the basic programs, drivers, data and graphic-rendering libraries.
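
As a rough illustration of the hash-linked "chain of blocks" idea described above, here is a hedged Python sketch. It is a simplified teaching example of how such structures are commonly built, not the implementation of any particular blockchain or metaverse platform; the function names and block fields are assumptions for the demo.

```python
# Minimal hash-linked chain: each block stores the hash of the previous block,
# so any edit to earlier data is detectable. Simplified sketch only; real
# blockchains add consensus, signatures, networking, and much more.
import hashlib
import json


def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def add_block(chain: list[dict], data: str) -> None:
    """Append a new block that commits to the hash of the current last block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})


def is_valid(chain: list[dict]) -> bool:
    """Verify every block still points at the true hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))


if __name__ == "__main__":
    chain: list[dict] = []
    add_block(chain, "alice pays bob 5 tokens")
    add_block(chain, "bob buys a virtual plot")
    print(is_valid(chain))         # True
    chain[0]["data"] = "tampered"  # any change to an earlier block...
    print(is_valid(chain))         # ...breaks the chain: False
```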

How to make money in the metaverse

As with most technology, the crucial question is how to monetize the experience. The metaverse offers most of the options available in the real world, plus a few that are only available virtually.

Buying and selling virtual land

Just like people are snapping up land in the real world, investors are buying up digital spaces, including “locations.” Buying virtual real estate requires using virtual currency, aka cryptocurrency, to buy directly from a virtual developer. Currently, the two most popular platforms are The Sandbox and Decentraland, each of which has its own currency (SAND and MANA, respectively).

Trading crypto

You can make money trading cryptocurrencies, similar to how you can make money investing in stocks: it requires upfront (real) capital and a high tolerance for risk.

Trading NFTs

Non-fungible tokens are unique digital assets recorded on a blockchain that represent an underlying item, usually music, art (especially popular memes), in-game items and videos. Creating, buying and selling NFTs can be a lucrative business if you can predict what will be popular. Keep in mind that NFTs are potentially bad for the environment.

Selling real-world goods virtually

Virtual storefronts for real products are already live for some stores, like the aforementioned Walmart. While there are currently a lot of kinks, the goal is to eventually provide a virtual experience that is better than a real one. Virtual purchases will deliver real goods.

Other ways to make money

New financial opportunities are popping up as the technology evolves. Some possibilities include hosting metaverse events, selling virtual items like digital clothing or hairstyles for avatars, selling metaverse-specific services and trading metaverse tokens.

Examples of the metaverse

The metaverse is already all around you if you know where to look for it. Second Life, the popular computer game that simulates real life, is a natural fit for the metaverse and is quickly gaining popularity. Other games, including those mainly popular with the younger generation, like Fortnite, Minecraft and Roblox, are also big in the virtual sphere; by some estimates, nearly 100 million people log on to these games daily. First-person shooter and quest games become even more realistic and immersive in the metaverse.

Meta Horizons is like Facebook on steroids. The social platform is aiming to be the one-stop shop for digital socializing, communications and living.

It doesn’t have to be that intense though, adds Madsen. Even using a stargazing app or a voice-changing filter on your phone is engaging with the metaverse in a small way. “We all do it a lot more than we realize,” he says. “Anytime you’re using a virtual enhancement in your life, it’s a small part of extended reality or the metaverse.”

Is the metaverse safe?

All the safety concerns that exist about the internet are magnified by virtual reality (the more real the environment, the more real-feeling the scams), along with some new ones particular to the metaverse, says Madsen.

Privacy

Blockchain technology is built to be a far more secure and private way to share information, but every tech has its flaws. In addition, laws regarding digital privacy rights are in flux, and there are many questions about the legality of data privacy in the metaverse.

Accessibility

Human bodies aren't equipped on their own to access the metaverse, as it requires hardware, software and knowledge, all of which can be very expensive for individuals to get. In addition, some countries or regions would need to install expensive and complicated infrastructure to enhance data storage and data processing speeds. This could create a volatile system of technological haves and have-nots.

Health

Virtual reality has a powerful effect on the brain’s behavior, and this raises real-world concerns about physical and mental health, says Madsen. There are the obvious risks of physically injuring yourself from tripping or falling, but people are also reporting headaches, vertigo, muscle soreness and vision issues. Plus, people who are immersed in digital worlds often are doing so at the expense of exercising, breathing fresh air and socializing physically.

The more subtle health risks are mental. Because VR provides a much more realistic experience than watching something on a computer screen, the emotional and mental impacts are more intense. Watching a horror movie in VR, say, could cause real trauma, Madsen says. Not to mention that all the downsides of the current internet are magnified in VR, like violent pornography, the black market, sex trafficking and criminal activities.

When will the metaverse come out?

The metaverse already exists in theory and in many practical ways, but expect the technology to explode over the next five to ten years, predicts Madsen. Wearables, like VR headsets, will become comfortable, portable and more powerful. Software will become more realistic, heading toward “fully immersive” experiences.

This technology will have huge impacts on how people work (physical proximity will be a much smaller priority, but people may be required to be on the clock, around the clock), how people play (games won’t be limited by physical constraints like gravity), how people socialize (being present as a hologram at a birthday party would be much better than a video chat) and, most important, how we consume information. If we live in a “post-truth” society now, imagine what it will be like when lies are even more realistic and believable and deepfakes aren’t just 2D.

Your next favorite story won’t be written by AI – but it could be someday

Haoran Chu
Assistant Professor of Communications, University of Florida

Sixiao Liu
Assistant Professor of Population Health Sciences, University of Central Florida

Stories define people – they shape our relationships, cultures and societies. Unlike other skills replaced by technology, storytelling has remained uniquely human, setting people apart from machines. But now, even storytelling is being challenged. Artificial intelligence, powered by vast datasets, can generate stories that sometimes rival, or even surpass, those written by humans.

#ai #contentcreation #stories #generativeai #technology

Creative professionals have been among the first to feel the threat of AI. Last year, Hollywood screenwriters protested, demanding – and winning – protections against AI replacing their jobs. As university professors, we've seen student work that seems suspiciously AI-generated, which can be frustrating.

Beyond the threat to livelihoods, AI's ability to craft compelling, humanlike stories also poses a societal risk: the spread of misinformation. Fake news, which once required significant effort, can now be produced with ease. This is especially concerning because decades of research have shown that people are often more influenced by stories than by explicit arguments and entreaties.

We set out to study how well AI-written stories stack up against those by human storytellers. We found that AI storytelling is impressive, but professional writers needn’t worry – at least not yet.

The power of stories
How do stories influence people? Their power often lies in transportation – the feeling of being transported to and fully immersed in an imagined world. You’ve likely experienced this while losing yourself in the wizarding world of Harry Potter or 19th-century English society in “Pride and Prejudice.” This kind of immersion lets you experience new places and understand others’ perspectives, often influencing how you view your own life afterward.

When you're transported by a story, you not only learn by observing, but your skepticism is also suspended. You're so engrossed in the storyline that you let your guard down, allowing the story to influence you without triggering skepticism or the feeling of being manipulated.

Given the power of stories, can AI tell a good one? This question matters not only to those in creative industries but to everyone. A good story can change lives, as evidenced by mythical and nationalist narratives that have influenced wars and peace.

A woman reads from a book to a small audience in a bookstore
Storytelling can be powerfully influential – especially if people sense the human behind the words.

Studying whether AI can tell compelling stories also helps researchers like us understand what makes narratives effective. Unlike human writers, AI provides a controlled way to experiment with storytelling techniques.

Head-to-head results
In our experiments, we explored whether AI could tell compelling stories. We used descriptions from published studies to prompt ChatGPT to generate three narratives, then asked over 2,000 participants to read and rate their engagement with these stories. We labeled half as AI-written and half as human-written.

Our results were mixed. In three experiments, participants found human-written stories to be generally more “transporting” than AI-generated ones, regardless of how the source was labeled. However, they were not more likely to raise questions about AI-generated stories. In multiple cases, they even challenged them less than human-written ones. The one clear finding was that labeling a story as AI-written made it less appealing to participants and led to more skepticism, no matter the actual author.

Why is this the case? Linguistic analysis of the stories showed that AI-generated stories tended to have longer paragraphs and sentences, while human writers showed more stylistic diversity. AI writes coherently, with strong links between sentences and ideas, but human writers vary more, creating a richer experience. This also points to the possibility that prompting AI models to write in more diverse tones and styles may improve their storytelling.
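As a rough illustration of what such a linguistic analysis can look like (a minimal sketch, not the authors' actual method), sentence length and its variability are easy proxies: more uniform sentence lengths suggest the flatter style the study attributes to AI-generated text.

```python
import re
import statistics


def sentence_stats(text: str) -> tuple[float, float]:
    # Mean sentence length in words, and its standard deviation (a crude "diversity" signal).
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.mean(lengths), statistics.pstdev(lengths)


human_sample = "The rain came sideways. She ran. Nobody on the platform moved, not even the guard."
ai_sample = ("The rain was falling heavily across the empty platform as she began to run toward the train. "
             "The guard stood silently near the entrance while the other passengers remained completely still.")

print(sentence_stats(human_sample))  # shorter average sentences, larger spread
print(sentence_stats(ai_sample))     # longer average sentences, smaller spread
```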

These findings provide an early look at AI’s potential for storytelling. We also looked at research in storytelling, psychology and philosophy to understand what makes a good story.

We believe four things make stories engaging: good writing, believability, creativity and lived experience. AI is great at writing fluently and making stories believable. But creativity and real-life experiences are where AI falls short. Creativity means coming up with new ideas, while AI is designed to predict the most likely outcome. And although AI can sound human, it lacks the real-life experiences that often make stories truly compelling.

Closing in?
It’s too early to come to a definitive conclusion about whether AI can eventually be used for high-quality storytelling. AI is good at writing fluently and coherently, and its creativity may rival that of average writers. However, AI’s strength lies in predictability. Its algorithms are designed to generate the most likely outcome based on data, which can make its stories appealing in a familiar way. This is similar to the concept of beauty in averageness, the documented preference people have for composite images that represent the average face of a population. This predictability, though limiting true creativity, can still resonate with audiences.

For now, screenwriters and novelists aren’t at risk of losing their jobs. AI can tell stories, but they aren’t quite on par with the best human storytellers. Still, as AI continues to evolve, we may see more compelling stories generated by machines, which could pose serious challenges, especially when they’re used to spread misinformation.

Article printed under Creative Commons license.

This 81-year-old 'biohacker' spends $70,000 a year trying to reverse aging

Kenneth Scott travels internationally for experimental treatments, doesn't use soap, and spends hundreds of thousands of dollars on his quest for immortality

Every few months, the internet explodes with news about Bryan Johnson – the tech entrepreneur who once had his son’s plasma infused with his blood in an attempt to extend his own life.

#biohacker #longevity #technology #aging

Johnson’s endeavors to prolong his life and reverse aging through any means possible have been widely met with both intrigue and ridicule. He is not, however, the only person pursuing a prolonged youth.

“When your heart stops beating, you’re guilty of mass cellular genocide,” Kenneth Scott, an 81-year-old biotech investor and real estate developer, told Quartz. “Our culture has the mentality that we were born to die. From childhood, we were taught that we’re going to die. But I suggest that that culture is out of date.”

Like Johnson, Scott is invested in reversing his age. Scott argues that it’s not enough to slow the aging process: He wants to be immortal and is part of an anti-aging movement that has spawned a litany of conferences and experimental treatments. Its adherents often travel internationally, accessing medical treatments that are not approved by the FDA or administered by doctors in the United States.

The octogenarian asserts that he can dance like he did at 18, has youthful skin, and when tested, his biological age reads as 18 years old. Scott estimates that he and his wife spend $70,000 annually on personal treatments to try to reverse aging, on top of the estimated $500,000 to $750,000 he has invested in biotechnology companies that study anti-aging technology.

Robinhood just got on the election betting bandwagon

Robinhood has jumped into election trading just a week before the race between Donald Trump and Kamala Harris draws to a close

You can now trade a Kamala Harris or Donald Trump contract on the trading platform Robinhood (HOOD, +3.12%). With just a week remaining before the election, this program, which launched on Monday, enables users to bet on which candidate they believe will win, adding a new way to engage with the political process through the platform.

#election #betting #robinhood

Volkswagen will close factories and lay off thousands of workers

It last closed down a plant in 1988. Now Europe's biggest carmaker wants to close down three more

Volkswagen plans to close at least three plants in Germany and lay off thousands of workers, marking its first closure in decades as it looks to cut costs, according to the head of its works council.

#vw #automotive #europe #layoffs

In addition to those closures, all additional Volkswagen plants in Germany will be downsized, Daniela Cavallo told employees on Monday in remarks reported by the Deutsche Presse-Agentur (DPA). Mass layoffs are being planned, with entire departments facing closure or relocation.

Volkswagen employs roughly 300,000 people in Germany, including tens of thousands spread across its headquarters and main plant in Wolfsburg. Europe's biggest carmaker also operates another nine factories in Germany.

“All German VW plants are affected by these plans. None of them are safe,” Cavallo said during the event, according to the DPA.

Until now, Volkswagen has never closed a plant in its native country. The last time it closed any factory was in 1988, when it shut down its location in Pennsylvania’s Westmoreland County. In July, it weighed closing an Audi factory in Brussels, as demand for high-end electric cars sank.

Here's Everything You Need to Know About the Future of Technology

From self-driving cars to space travel to NFTs, we answer your questions about where technology is heading.

#future #technology

I miss one topic here and that's security. In the not-so-distant future we will need to abandon almost every encryption tech we use today because it will be obsolete.
This is gonna be mega disruptive.

It could be. That is something that many are concerned about, especially with the prospect of quantum. I haven't followed that in a while to see the progress.

But the logical solution, in my mind, is to build quantum security.

Yeah, but just imagine how many security solutions will need to change. It will be a chaotic time.
Basically every security solution used today can be broken by quantum.

The Future of Technology: What's Coming Next?

As technology continues to evolve at an incredible pace, it's essential to stay ahead of the curve and understand what's coming next. From self-driving cars to the metaverse, robots, non-fungible tokens (NFTs), and space travel, we'll explore the latest trends and innovations that will shape our future.

Self-Driving Cars

While we're not quite there yet, self-driving cars are becoming a reality. In China, self-driving taxis are already transporting passengers, and the country plans to increase the sales of Level 4 vehicles to 20% of the total by 2030. This means that by the end of the decade, one-fifth of all cars sold in China will be capable of driving themselves.

However, privacy concerns may stymie the promise of kicking back on your commute, as autonomous vehicles will require access to personal data to function effectively. Additionally, the development of self-driving cars has raised questions about liability in the event of an accident, and regulators are still working to establish clear guidelines.

The Metaverse

The metaverse is a 3D virtual space that can be accessed through virtual reality goggles, adding elements of the digital on top of our day-to-day lives. While Meta is leading the charge, it's not the only company working on the metaverse, and its future is uncertain.

The metaverse has the potential to revolutionize the way we interact with each other and with technology, but it also raises concerns about addiction, social isolation, and the blurring of lines between the physical and digital worlds.

Robots and Artificial Intelligence

By 2035, one in three jobs could be automated by real robots, predicts PwC. While robots are traditionally applied to repetitive processes, white-collar roles are also affected, particularly those focused on data sorting. For example, robots are already being used in customer service, where they can handle routine inquiries and free up human customer service representatives to focus on more complex issues. However, jobs where workers are less likely to be replaced by robots include those in health care, where human empathy and judgment are essential.

NFTs

Non-fungible tokens (NFTs) are one-of-a-kind digital objects that can't be exchanged for each other or copied. While they've captured the zeitgeist, NFTs have faced criticism for being stolen or using images that don't legally belong to the artists behind them. The rise of NFTs has also raised questions about the value and ownership of digital art, and whether it's possible to truly own something that exists only in the digital realm.

Space Travel

Fifty years ago, astronauts traveled to space in rockets designed, built, and maintained by NASA. Today, billionaires are enjoying journeys into low orbit on rockets they paid for. As private enterprise learns more about putting rockets and satellites into space, they're able to help NASA on its missions. The hope is that people will follow, possibly by 2025 or realistically by 2030. However, space travel is still a highly regulated and expensive endeavor, and it's unclear whether it will become more accessible to the general public in the near future.

Conclusion

The future of technology is exciting and uncertain. As we navigate the next decade, it's essential to stay informed and adapt to the changing landscape. Whether it's self-driving cars, the metaverse, robots, NFTs, or space travel, the possibilities are endless, and the future is full of promise. However, it's also important to consider the potential challenges and risks associated with these emerging technologies, and to work towards creating a future that is equitable, sustainable, and beneficial to all.

Meta reportedly building AI search engine to cut reliance on Google, Bing

The AI search engine segment is heating up with ChatGPT-maker OpenAI, Google and Microsoft all vying for dominance in the rapidly evolving market.

Meta Platforms is working on an artificial intelligence-based search engine as it looks to reduce dependence on Alphabet’s Google and Microsoft’s Bing, the Information reported Monday.

#meta #search #technology #newsonleo


Meta’s web crawler will provide conversational answers to users about current events on Meta AI, the company’s chatbot on WhatsApp, Instagram and Facebook, according to the report, which cited a person involved with the strategy.

The Facebook-owner currently relies on Google and Bing search engines to give users answers on news, stocks and sports.

Meta did not immediately respond to a Reuters request for comment.

Google is aggressively integrating its latest and most powerful AI model, Gemini, into core products like Search, aiming to deliver more conversational and intuitive search experiences.

OpenAI relies on its largest investor, Microsoft, for web access to answer topical queries, using its Bing search engine.

Scraping web data to train AI models and search engines, however, has raised concerns about copyright infringement and fair compensation for content creators.

Meta said last week its AI chatbot will use Reuters content to answer user questions in real time about news and current events.

Oh this will be good. Google vs Meta. Dont even know if I want any of them to "win"?

Realistic AI photos reveal what typical cheaters look like

Are you a bald man with a big nose in your 40s? You're more likely to cheat on your partner, according to an AI-generated profile of what a typical philanderer looks like.

Bald-faced liars are more likely to be adulterers as well.


#ai #Photos #deepfakes #technology

“We’ve shed light on the physical traits associated with those prone to cheating,” declared Rosie Maskel, a senior marketing executive at the online casino MrQ, who conducted the scandalous study, Kennedy News reported.

The digital betting bazaar reportedly surveyed 2,000 Brits — many of whom had been betrayed in the boudoir — to deduce what attributes cheaters had in common.

They then fed the results to an AI-powered image generator to create a “photo-fit” depiction of the average cheater.

The resultant AI-rtist's depiction showed a man in his 40s with blue-gray eyes, sparse or no hair, and frown lines. Throw in small lips and a larger schnoz and you get the poster child for someone who sleeps around on their spouse, per the study.

Now we just train the AI to have the same biases as we do. Why would one appearance have a higher cheating rate than another?

Blackwell 3D Partners With Stayzation Lifestyle to Pioneer 3D Concrete Printing in Luxury Villa Developments


DUBAI, UAE, Oct. 28, 2024 (GLOBE NEWSWIRE) -- Blackwell 3D Construction Corp. (OTC: BDCC) ("Blackwell 3D” or the "Company"), an innovative 3D house printing technology company, is pleased to announce a groundbreaking partnership with Stayzation Lifestyle, a prominent player in the luxury vacation real estate market throughout India. The collaboration aims to explore the potential of 3D concrete printing for high-end villa developments, starting with Stayzation Lifestyle’s land parcel in Lonavala.

#blackwell3d #3dprinting #construction

In the first phase of this partnership, Blackwell 3D will provide comprehensive consultation services to Stayzation Lifestyle. This phase will include:

  • Feasibility Study: A thorough assessment of the viability of 3D printing technology for Stayzation’s Lonavala property, evaluating factors such as terrain, project scope, and technical requirements.
  • Expert Guidance: Tailored advice on regulatory, technical, and operational considerations related to 3D concrete printing, ensuring that Stayzation meets all industry standards and compliance benchmarks.
  • Technology Evaluation: An analysis of the design flexibility, cost-efficiency, and environmental sustainability that 3D printing can offer, particularly in the context of high-end villa construction.

“We are excited to bring our 3D concrete printing technology to this collaboration with Stayzation Lifestyle. The potential to transform how luxury villas are constructed, from reducing costs to enhancing design possibilities, is immense,” said Mohammedsaif Zaveri, CEO of Blackwell 3D.

Upon completion of the initial consultation and feasibility study, the partnership will explore additional opportunities to enhance their cooperation. Future initiatives may include:

  • Equipment Leasing: Stayzation Lifestyle may lease 3D printing equipment from Blackwell 3D for use in its ongoing and future projects.
  • End-to-End Construction Management: Blackwell 3D may be contracted to oversee the complete 3D construction process for select villa developments, ensuring seamless execution from design to completion.
  • Expansion into Joint Ventures: Both companies are open to pursuing joint ventures on other properties and locations within India and potentially in other markets where Stayzation Lifestyle operates or plans to expand.

“With Blackwell 3D’s printing technology, we are eager to innovate and bring a new level of sophistication, sustainability, and efficiency to our villa developments. This partnership marks an exciting step in revolutionizing luxury real estate,” declared Ajaz Khan, CEO of Stayzation Lifestyle.

This partnership positions both companies at the forefront of technological innovation in the real estate industry, with the potential to reshape the way high-end properties are designed and built.

Understanding Vector Databases: What They Are and When to Use Them

As artificial intelligence (AI) and machine learning (ML) continue to advance, the need for efficient data storage and retrieval has become more critical than ever. Vector databases have emerged as a powerful solution for managing high-dimensional data used in various AI applications. This blog post will explore what vector databases are, their different types, the pros and cons of each, and how they work.

#ai #database #vector #technology #machinelearning

Background Information
A vector database is a specialized type of database designed to store, index, and retrieve vector embeddings efficiently. Vector embeddings are numerical representations of data (text, images, audio) that capture their semantic meaning. These databases are crucial for applications involving similarity search, recommendation systems, and natural language processing.

What is a Vector Database?

Vector databases store and manage high-dimensional vectors, allowing for efficient similarity searches and retrieval. They use advanced algorithms to index and query these vectors, enabling rapid and accurate data retrieval based on semantic similarities.
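As a minimal illustration (toy four-dimensional vectors standing in for real embeddings, which typically have hundreds or thousands of dimensions), cosine similarity is one common measure a vector database uses to decide which stored vectors are "close" to a query:

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two vectors; 1.0 means they point in the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


query = np.array([0.9, 0.1, 0.0, 0.3])             # embedding of the search text
doc_about_dogs = np.array([0.8, 0.2, 0.1, 0.4])    # semantically similar document
doc_about_taxes = np.array([0.0, 0.9, 0.7, 0.1])   # semantically unrelated document

print(cosine_similarity(query, doc_about_dogs))   # high score (about 0.98)
print(cosine_similarity(query, doc_about_taxes))  # low score (about 0.11)
```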

Key Features of Vector Databases

Scalability: Ability to handle large datasets and scale across multiple nodes.
Performance: Fast indexing and query response times.
APIs and SDKs: Comprehensive API suites for integration with various programming languages.
Security: Features like role-based access control and data encryption.
Different Types of Vector Databases

  1. Pinecone
Features: Fully managed service, real-time data ingestion, low-latency search.
Pros: High performance, easy to use.
Cons: Not open-source, cannot run locally.
Use Cases: Large-scale ML applications, real-time recommendation systems.

  2. Milvus

Features: Open-source, GPU support, integrates with ML frameworks like PyTorch and TensorFlow.
Pros: High performance, strong community support.
Cons: Requires more setup and maintenance.
Use Cases: Similarity search, image/video analysis, NLP.

  3. Weaviate

Features: Open-source, supports both vectors and data objects, GraphQL-based API.
Pros: Highly scalable, flexible data management.
Cons: Performance can vary based on configuration.
Use Cases: Semantic search, cybersecurity threat analysis, recommendation engines.

  4. Qdrant

Features: Open-source, JSON payloads, filtering support.
Pros: Versatile, suitable for various data types.
Cons: Newer in the market, fewer integrations.
Use Cases: Neural network-based matching, faceted search.

  5. Chroma

Features: Open-source, feature-rich with queries and filtering.
Pros: Easy to use, suitable for LLM applications.
Cons: Limited to certain use cases.
Use Cases: LLM applications, knowledge management.
How Vector Databases Work

Data Ingestion: Importing data into the database, converting it into vector embeddings.
Indexing: Creating indices to enable efficient querying of vectors.
Querying: Using various distance metrics (e.g., cosine similarity, Euclidean distance) to find similar vectors.
Storage and Retrieval: Managing data across multiple nodes for scalability and performance.
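The pipeline above can be boiled down to a minimal in-memory sketch (brute-force scan, no real indexing, sharding or persistence; the class and method names are invented for illustration):

```python
import numpy as np


class TinyVectorStore:
    # Toy vector store: ingest embeddings, then query the top-k most similar by cosine similarity.

    def __init__(self) -> None:
        self.ids: list[str] = []
        self.vectors: list[np.ndarray] = []

    def ingest(self, item_id: str, embedding: np.ndarray) -> None:
        # A real database would also update an index (e.g. an HNSW graph) at this step.
        self.ids.append(item_id)
        self.vectors.append(embedding / np.linalg.norm(embedding))

    def query(self, embedding: np.ndarray, k: int = 3) -> list[tuple[str, float]]:
        # Brute-force cosine similarity against every stored vector, highest scores first.
        q = embedding / np.linalg.norm(embedding)
        scores = np.array([float(np.dot(q, v)) for v in self.vectors])
        top = np.argsort(scores)[::-1][:k]
        return [(self.ids[i], float(scores[i])) for i in top]


store = TinyVectorStore()
store.ingest("doc-1", np.array([0.9, 0.1, 0.2]))
store.ingest("doc-2", np.array([0.1, 0.8, 0.3]))
store.ingest("doc-3", np.array([0.85, 0.15, 0.25]))
print(store.query(np.array([1.0, 0.1, 0.2]), k=2))  # doc-1 and doc-3 rank highest
```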

Technical Challenges

Scalability: Handling ever-growing datasets and ensuring efficient querying.
Performance: Maintaining low latency and high throughput for real-time applications.
Integration: Providing seamless integration with various ML frameworks and applications.
Security: Ensuring data privacy and secure access controls.
Use Cases
Recommendation Systems: Personalizing content based on user preferences and behavior.
Similarity Search: Finding similar items in large datasets, such as images or documents.
Natural Language Processing: Enhancing search engines and chatbots with semantic understanding.
Cybersecurity: Detecting anomalies and potential threats through pattern recognition.

Conclusion
Vector databases are essential for modern AI applications that require efficient management and retrieval of high-dimensional data. Understanding the strengths and limitations of each type can help you choose the right database for your specific needs. By leveraging the power of vector databases, you can enhance the performance and scalability of your AI and ML applications.

  • Vector databases are a scalable solution for businesses expanding their datasets.
  • Business benefits include real-time processing and improved search accuracy.
  • Vector databases will be crucial for machine learning and AI applications.

By adding content to a vector database, you're not just storing data – you're fueling a system that learns and evolves with your business. The beauty of vector databases extends beyond machine learning, though. They unlock a world of possibilities, from supercharging search capabilities to enabling hyper-personalized customer experiences.

The algorithms enabled by vector databases give AI programs the ability to find patterns in content. These patterns are a foundation of the contextual learning you’ve experienced if you’ve interacted with an AI system. With more quality content over time, AI programs are able to find hidden correlations, make predictions, and generate or summarize content in remarkable ways.

Israel says it will field Iron Beam air-defense lasers in a year

A contract with manufacturers Rafael and Elbit will accelerate the laser-defense system, which is meant to relieve pressure from traditional interceptors.

#israel #ironbeam #defense #middleeast

What is Iron Beam?

Iron Beam is a laser-based air defense system designed to intercept and disable incoming threats such as rockets, missiles, drones, and cruise missiles. The system uses a high-powered laser cannon to generate an intense beam of energy that can damage or destroy targets, even at long range.

How does Iron Beam work?

Iron Beam operates by using a laser cannon to generate a high-powered beam of energy that is directed at the incoming threat. The laser beam is designed to be highly focused, allowing it to penetrate the target's armor and cause significant damage. The system can be programmed to follow low-flying targets, making it effective against a range of air threats.

Advantages of Iron Beam

Iron Beam offers several advantages over traditional air defense systems. One of its main benefits is its relatively low cost per interception: firing the laser consumes only electricity, a small fraction of the cost of launching a traditional interceptor missile. This makes Iron Beam an attractive option for countries looking to improve their air defense capabilities without breaking the bank.

Another advantage of Iron Beam is its effectively unlimited magazine: as long as the system has power, it can keep firing without depleting a stock of interceptor missiles, which makes it well suited to sustained engagements.

Limitations of Iron Beam

While Iron Beam is a highly effective air defense system, it does have some limitations. One of its main limitations is dealing with large rocket barrages, which can be difficult to intercept using a laser-based system because each target must be engaged in turn. Additionally, Iron Beam's effectiveness can be impaired by weather conditions such as clouds, rain, or sandstorms, which scatter and weaken the laser beam.

Integration with existing air defense systems

Iron Beam is designed to be integrated into existing air defense systems, including the Israeli air defense network. The system will be used in conjunction with other air defense systems, such as the Iron Dome batteries, to provide a more comprehensive and effective defense against a range of air threats.

Benefits of integration

The integration of Iron Beam into the Israeli air defense network is expected to provide several benefits. One of the main advantages is its ability to provide an additional layer of defense against a range of air threats, including rockets, missiles, drones, and cruise missiles.

Another benefit is affordability: each laser interception consumes only electricity, costing far less than firing a traditional interceptor missile and easing the strain on stockpiles of conventional interceptors.

Comparison to other air defense systems

Iron Beam is often compared to other air defense systems, such as Iron Dome, which uses traditional interceptors to defend against incoming threats. While Iron Dome is effective against small rockets and mortar shells, it is expensive to deploy and maintain, since each interceptor missile costs tens of thousands of dollars. Iron Beam, by contrast, offers a more affordable option for air defense, as each laser shot consumes only a modest amount of electricity.

Timeline for integration

The integration of Iron Beam into the Israeli air defense network is expected to take place within the next year. The system is currently undergoing operational development and adaptation to the battlefield, and is expected to be fully integrated into the air defense network by the end of 2024.

Budget allocation

The budget for the integration of Iron Beam into the Israeli air defense network is estimated to be approximately NIS 2 billion (approximately $536 million). The majority of this budget will be allocated to Rafael, the main developer of the Iron Beam system, while a smaller portion will be allocated to Elbit, the supplier of the laser cannon.

Overall, the integration of Iron Beam into the Israeli air defense network represents a significant milestone in the country's efforts to bolster its defenses and counter the growing threat from airborne adversaries. With its advanced laser technology and affordable design, Iron Beam is expected to provide a valuable addition to the Israeli air defense network, helping to protect the country from a range of air threats.

JPMorgan begins suing customers who allegedly stole thousands of dollars in 'infinite money glitch'

JPMorgan is investigating thousands of cases related to the glitch, which highlights the risk that social media can amplify vulnerabilities found at a bank.

JPMorgan Chase has begun suing customers who allegedly stole thousands of dollars from ATMs by taking advantage of a technical glitch that allowed them to withdraw funds before a check bounced.

#jpmorgan #chase #bank #newsonleo

The bank on Monday filed lawsuits in at least three federal courts, taking aim at some of the people who withdrew the highest amounts in the so-called infinite money glitch that went viral on TikTok and other social media platforms in late August.

A Houston case involves a man who owes JPMorgan $290,939.47 after an unidentified accomplice deposited a counterfeit $335,000 check at an ATM, according to the bank.

"On August 29, 2024, a masked man deposited a check in Defendant's Chase bank account in the amount of $335,000," the bank said in the Texas filing. "After the check was deposited, Defendant began withdrawing the vast majority of the ill-gotten funds."

Trump accuses Taiwan of stealing U.S. chip business, here's what the election could bring

Trump said tariffs should be put on Taiwan's chips during an appearance on the Joe Rogan Experience podcast.

Former president Donald Trump reiterated his frustration with Taiwan over the weekend when he appeared on the Joe Rogan Experience podcast and accused the country of stealing America's chip industry.

#trump #taiwan #chips #joerogan #tariffs

Trump criticized the U.S. CHIPS Act and said he would implement tariffs on chips from Taiwan if elected president. Such tariffs would impact the global leader in chip building, Taiwan Semiconductor Manufacturing Company, whose customers include companies like Nvidia and Apple.

Shares of Taiwan Semiconductor closed down 4.3% on Monday.

"You know, Taiwan, they stole our chip business... and they want protection," Trump said during the appearance. The podcast was published on Saturday evening.

Every hyperscaler working on its own in-house chips, including Amazon, Google and Microsoft, fabs with the Taiwanese company. UBS analysts estimate over 90 percent of the world's advanced chips are manufactured by TSMC. Intel and Samsung are among the companies trying to compete but have faced a series of setbacks.

Norway to Buy US Air Defense Missiles for More Than $360 Million

Norway has agreed with U.S. authorities to buy AIM-120C-8 AMRAAM air defense missiles for more than 4 billion Norwegian crowns ($362.91 million), the Norwegian military said Monday.

"With more and newer missiles, the Norwegian Armed Forces will have a better ability to protect Norway against air attacks," Norway's Defense Minister Bjoern Arild Gram said in a statement from the Norwegian Defense Material Agency.

#defense #norway #missiles #technology #newsonleo

Have a feeling that the US military complex is going to work overtime in the near future. Europe is preparing for war.

The warmongers are fully in control. That is what runs the US State Department.

People like Blinken should be tried for treason.

Depends on who you ask. I believe the US will benefit greatly when they can sell military equipment.

The Military Industrial Complex are part of the warmongers in my definition. I dont separate them. Those financially benefitting are just as guilty.

I dont blame the regular workers in the factories. They only do whatever is required of them.

The missiles are primarily intended for Norway's ground-based air defense system, but can also be included in the weapons inventory of F-35A fighter aircraft, the agency said.

The procurement was among the largest single procurements of weapons ever made for the Norwegian Armed Forces, according to the agency.

Norway, which is a member of NATO and shares a border with Russia, has vowed to ramp up defense spending following Moscow's full-scale invasion of Ukraine. ($1 = 11.0220 Norwegian crowns)

MIT’s new cancer therapy combines tumor destruction, chemo in single implant

The combination of phototherapy and chemotherapy could offer a more effective way to fight aggressive tumors.

Patients with late-stage cancer often have to endure multiple rounds of different types of treatment, which can cause unwanted side effects and may not always help.

In hopes of expanding the treatment options for those patients, MIT researchers have designed tiny particles that can be implanted at a tumor site. These particles deliver two types of therapy: heat and chemotherapy.

#mit #cancer #technology #healthcare

This approach could avoid the side effects that often occur when chemotherapy is given intravenously, and the synergistic effect of the two therapies may extend the patient’s lifespan longer than giving one treatment at a time.

Dual-action cancer therapy

Patients with advanced tumors usually undergo a combination of treatments, including chemotherapy, surgery, and radiation.

Phototherapy is a newer treatment that involves implanting or injecting particles that are heated with an external laser. This raises the particles’ temperature enough to kill nearby tumor cells without damaging other tissue.

Current approaches to phototherapy in clinical trials use gold nanoparticles, which emit heat when exposed to near-infrared light.

The MIT team wanted to devise a way to deliver phototherapy and chemotherapy together, which they thought could make the treatment process easier on the patient and might also have synergistic effects.

They decided to use an inorganic material called molybdenum disulfide as the phototherapeutic agent. This material converts laser light to heat very efficiently, so low-powered lasers can be used.

To create a microparticle that could deliver both cancer treatments, the researchers combined molybdenum disulfide nanosheets with either doxorubicin, a hydrophilic drug, or violacein, a hydrophobic drug.

US develops portable device that extracts water from air using 50% less energy

The device uses special materials that change temperature when stretched or compressed, allowing it to cool the air and condense water vapor with minimal energy use.

Several researchers across the globe are conducting studies to make drinking water readily available, even in the driest of climates. A research team at The Ohio State University has made a significant stride in this regard.

They have developed a new prototype water harvester that promises to be simpler, more efficient, and more portable than traditional methods of pulling drinking water from the air.

#water #technology

According to the researchers, the nickel titanium-based dehumidifier, built using temperature-sensitive materials, is so compact that it can be carried in a backpack, and it works by extracting water directly from the air.

Moreover, this new device uses only half the energy of a standard desiccant-wheel dehumidifier while extracting the same amount of water from the air.

Limitations to existing technologies

Interestingly, the process of extracting water from the air is not new. Several developments have occurred in this sphere. However, existing technologies often rely on bulky and energy-intensive methods.

“Whereas many existing water harvesting technologies are large, energy-intensive and slow, this team’s device is unique due to elastocaloric cooling,” added the press release.

This innovative approach employs special materials that change temperature when stretched or compressed. These materials allow the device to cool the air and condense water vapor with minimal energy consumption.

“This design is what also allowed their prototype to become portable enough to fit inside a backpack,” mentioned the university’s press release.

3D-printing your tooth: How an Indian breakthrough has changed dentistry

Indian scientists have developed a new technique in 3D printing and dental restoration that is a step forward in dentistry.

Known as Photoinduced Radical Polymerisation (PRP), this light-activated chemical process is a sustainable and cost-effective alternative that holds great promise in 3D printing and dental fillings.

#3dprinting #dental #india #Technology

The Photoinduced Radical Polymerisation (PRP) process is a new method combining two advanced techniques to create stronger, longer-lasting materials.

It works by using light to trigger a reaction that bonds molecules together, forming a solid material without needing any heat. This process depends on a special ingredient called a "photoinitiator," which, when it absorbs light, sets off the reaction.

By skipping the need for heat, this technique is safer, more efficient, and environmentally friendly, making it useful in areas like dentistry, 3D printing, and more.

Vector Database vs. Vector Search
While vector database and vector search are similar terms, their main difference lies in the function and the process of each. A vector database is an entire data management solution, while vector search is a type of semantic search tool.

When you conduct a vector search, your query vector will be compared to a large collection of vectors in an attempt to find similarities. This action is sometimes dubbed a similarity search. Unlike traditional databases, the goal here is to find similar matches in a short amount of time. Your database is where you conduct your vector search. Using indexing, you'll enjoy a lightning fast similarity search to take the legwork out of analyzing your data.
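As a concrete sketch of a vector search, assuming the open-source FAISS library is installed (pip install faiss-cpu), an exact nearest-neighbor query over a small batch of vectors looks roughly like this; production systems usually swap the flat index for an approximate one so queries stay fast as the collection grows:

```python
import numpy as np
import faiss  # similarity-search library from Meta AI Research

d = 64                                              # dimensionality of the embeddings
rng = np.random.default_rng(0)
stored = rng.random((1000, d)).astype("float32")    # vectors already in the "database"
queries = rng.random((3, d)).astype("float32")      # incoming query vectors

index = faiss.IndexFlatL2(d)                        # exact (brute-force) L2 index
index.add(stored)                                   # ingest the stored vectors

distances, ids = index.search(queries, 5)           # top-5 nearest neighbors per query
print(ids)        # row i holds the positions of the vectors closest to query i
print(distances)  # the corresponding squared L2 distances
```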

Tesla’s Cybercab takes workers for a ride at Giga Texas

Tesla has put a Cybercab on display at its Gigafactory Texas, with recent coverage from the site even showing the two-seat, autonomous vehicle driving a few employees around the parking lot.

Over the weekend, multiple users shared photos and video footage showing the Cybercab at Giga Texas, as it was parked in front of the facility’s main entrance. Along with being parked, the Cybercab was seen giving a few people rides in the parking lot, as shared in a short video on TikTok from user anthonyacord.

#tesla #cybercab #gigatexas #autonomy #taxi #robotaxi #technology

During Tesla’s Q3 earnings call last week, Elon Musk said the company was aiming for a volume production of two million units per year with the Cybercab, expected to happen as soon as 2026. Tesla also unveiled a wireless charging system for the Cybercab that it says has an efficiency rating of “well above 90 percent,” along with offering charging as another autonomous feature.

You can also check out Teslarati’s first-hand coverage and first ride in the Cybercab below, taken at the October 10 We, Robot unveiling event in Southern California.

Tesla launches low-voltage connector standard to simplify EV transition

Tesla has officially launched a new standard for the low-voltage connections in electric vehicles (EVs), which the company says will reduce necessary connection types in most EVs from over 200 to just six.

In a post on its blog on Monday, Tesla officially launched the Low-Voltage Connector Standard (LVCS), which is a group of six standardized EV connectors meant to simplify the manufacturing of EVs and help accelerate the world's transition to sustainable energy. Tesla says the connectors were designed with power and signal requirements for more than 90 percent of typical connections, offering the ability to increase operational efficiency, reduce manufacturing costs, and increase the potential for manufacturing automation.

#tesla #charging #ev #technology

In addition, Tesla writes that the LVCS suite was designed upon the 48-volt architecture built into the Cybertruck, meeting certain requirements for spacing for 48V operation. The company notes that the 48V architecture requires just a quarter of the current to deliver the same amount of power as commonly-used 12V systems.
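The "quarter of the current" figure follows directly from power = voltage x current: for the same load, quadrupling the voltage cuts the required current to a quarter. A quick check with illustrative numbers:

```python
def current_amps(power_watts: float, voltage_volts: float) -> float:
    # Current needed to deliver a given power at a given voltage (I = P / V).
    return power_watts / voltage_volts


power = 480.0  # watts; an arbitrary illustrative load
print(current_amps(power, 12.0))  # 40.0 A on a 12 V architecture
print(current_amps(power, 48.0))  # 10.0 A on a 48 V architecture, one quarter of the current
```

Lower current also allows thinner, lighter wiring, one practical reason carmakers are interested in 48 V architectures.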

The company also says that the LVCS equipment is designed to enable reliable autonomous vehicles, featuring single-wire sealing, independent secondary locking mechanisms, and a smaller overall housing size.

Tesla explains its intentions behind the designs and the standard’s manufacturing efficiency potential as follows:

To accelerate the world’s transition to sustainable energy, we are simplifying the manufacturing process and electrical connectivity requirements for all our vehicles. This includes the implementation of our Low-Voltage Connector Standard (LVCS), which allows us to reduce the large number of connector types required to just 6.

These 6 device connectors are designed to meet the power and signal requirements for over 90% of typical electrical device applications. This standardization unlocks further operational efficiencies, cost reductions and manufacturing automation.

Vinod Khosla calls SB 1047 author 'clueless' and 'not qualified' to regulate the real dangers of AI

Vinod Khosla said the author of California's recently vetoed AI bill, SB 1047, was clueless about the real dangers of AI.

Vinod Khosla said the author of California’s recently vetoed AI bill, SB 1047, was clueless about the real dangers of AI, and not qualified to have an opinion on global national security issues. The comment about state Senator Scott Wiener was made during an on-stage interview at TechCrunch Disrupt 2024.

#california #technology #ai #vinodkhosla #regulation

“He’s clueless about the real dangers, which are national security issues,” said Khosla, referring to Senator Wiener, who represents San Francisco in California’s legislature. “I’m a huge supporter of him when it comes to his efforts on housing and NIMBYism and stuff. So huge supporter on those issues because they are local issues. This is a global national security issue. He’s not qualified to have an opinion there.”

SB 1047 was a highly controversial AI bill that California’s legislature passed, but Governor Newsom vetoed in the face of opposition from Silicon Valley, Nancy Pelosi, and the United States Department of Commerce. The bill attempted to make AI laboratories liable for the most extreme dangers of their AI models, even if they were not the ones operating them in a dangerous way.

Wiz CEO explains why he turned down a $23 billion deal

Assaf Rappaport, the co-founder and CEO of cloud security startup Wiz, said that turning down a $23 billion offer from Google was “the toughest decision ever.”

Assaf Rappaport, the co-founder and CEO of cloud security startup Wiz, said that turning down a $23 billion offer from Google was “the toughest decision ever,” but justified it by saying the company can get even bigger and reach $100 billion because cloud security is the future.

#wiz #ceo #google #security #technology

“I think we did the right choice,” Rappaport said on Monday at the annual TechCrunch Disrupt conference.

“We believe it’s bigger, definitely bigger than endpoint, bigger than networks, so the opportunity to become a 100 plus billion dollar company is there. We believe that the company that is going to…own cloud security in the world is going to be a 100-plus billion dollar company,” he added. “I’m not sure it’s going to be Wiz, but if we do the right things, and we execute, I think it’s…in our hands.”

Even then, it was not an easy decision to make, as he had to think about Wiz’s investors, as well as its employees.

“I was super nervous,” he admitted. But it was he and his co-founders who made that call. “At a healthy company with a healthy relationship with investors, it’s always the founder’s decision.”

Wiz rejects Google’s $23 billion takeover in favor of IPO

Had the deal gone ahead, it would have been the largest acquisition ever made by Google.

Cybersecurity startup Wiz has turned down a $23 billion takeover bid from Google’s parent, Alphabet, breaking off what would have been the largest acquisition in the search giant’s history. In an internal memo seen by CNBC, Wiz co-founder Assaf Rappaport said the company would instead pursue an initial public offering.

#google #wiz #ipo #alphabet

“Saying no to such humbling offers is tough,” Rappaport said in the memo sent to Wiz employees. Had the acquisition gone ahead, it would have doubled the $12 billion valuation that Wiz announced in May after the company raised $1 billion in private funding. According to the memo, Wiz will now focus on achieving $1 billion in annual recurring revenue alongside the IPO — goals the security firm had set before its talks with Google. Neither Wiz nor Alphabet have officially acknowledged that a deal was being discussed.

Wiz offers cloud-based security solutions for enterprise customers, an attractive target that could have placed Google in a better position to compete with industry leaders Microsoft and Amazon. Antitrust regulators have increasingly fixated on deals made by Big Tech in recent years, however, and according to CNBC, both antitrust and investor concerns were cited as reasons for Wiz abandoning the deal.

The Justice Department has already launched two ongoing antitrust lawsuits against Google over its search engine and digital advertising businesses. Google purchased two cybersecurity firms in 2022 — Siemplify and Mandiant — for $500 million and $5.4 billion, respectively, with the latter company best recognized for uncovering the SolarWinds hack.

Jeff Bezos is no longer relentlessly focused on customer satisfaction

The man who only believes in capitalism managed to do capitalism wrong.

The fallout from the non-endorsement of Kamala Harris at The Washington Post is here: more than 200,000 canceled subscriptions, NPR reports. This is about 8 percent of the paid subscriber base, and the number of cancellations is still growing.

#jeffbezos #washingtonpost #npr

To put that in perspective, in an Oct. 15th story about Post CEO Will Lewis’s strategy to get more paying subscribers, The New York Times reported that the Post had added 4,000 subscribers since the beginning of 2024 through September. Like, I am actually flabbergasted: that’s fifty times as many cancellations in one weekend as The Post earned in the better part of a year.

“This is obviously an effort by Jeff Bezos to curry favor with Donald Trump in the anticipation of his possible victory.”

Now, there have been multiple reports at this point — from NPR, The Columbia Journalism Review, and The Washington Post itself — that the call to stop endorsing candidates came from Jeff Bezos himself. The same day as Lewis’s bizarre announcement of The Post’s non-endorsement, executives from Bezos’s space company, Blue Origin, met with presidential candidate Donald Trump.

I suppose I should mention the various government contracts Bezos’s other businesses have — among them, Amazon’s $10 billion NSA contract and Blue Origin’s $3.4 billion NASA contract. Trump has previously targeted Bezos for The Washington Post’s reporting. A columnist who quit the Post over the decision, Robert Kagan, told CNN, “This is obviously an effort by Jeff Bezos to curry favor with Donald Trump in the anticipation of his possible victory.” Kagan pointed to the business contracts as motivation.

Gmail will now help you write an email on the web with AI

“Help me write” is coming to Gmail on the web.

Google is expanding “Help me write” to Gmail on the web, allowing users to whip up or tweak emails using Gemini AI. Just like on mobile, users will see a prompt to use the feature when opening a blank draft in Gmail.

#gmail #google #web #email #technology #gemini

Google’s “Help me write” feature is only available to users who subscribe to Google One AI Premium or have the Gemini add-on for Workspace. In addition to generating an email draft, “Help me write” can also provide suggestions on how to formalize, elaborate, or shorten a message.

Google is also adding a shortcut for the “polish” option available within its “Help me write” toolset, which will appear on drafts with over 12 words. For Gmail on the web, users can click the shortcut or type Ctrl + H to quickly refine an email.

On mobile, the option will replace the existing “Refine my draft” shortcut. Instead of swiping to see options to polish, formalize, elaborate, or shorten an email, the app will automatically refine the message when the “polish” shortcut is swiped. Users can then tweak the message further with Google’s other AI editing tools.

Google will gradually roll out “Help me write” on the web, along with its new “polish” shortcut starting today.

Fortnite is streamlining its many battle passes

It’s going to get easier to level up your battle pass no matter what mode you’re playing.

Fortnite is bigger than a battle royale now, and to help make progressing through the various available battle passes a little bit more streamlined, Epic Games is going to allow XP from any experience apply to any of those passes.

#fortnite #technology #gaming

Currently, XP you earn across various modes in Fortnite doesn’t help you progress through the special passes for Lego Fortnite and the music-themed Fortnite Festival. Instead, you progress by earning “studs” and “festival points,” respectively, by playing those modes.

Starting November 2nd, however, Epic will begin the migration to let you progress using XP. As Epic details in a blog post, the change will first apply to the upcoming Music Pass (renamed from Festival Pass), meaning festival points will go away. On December 1st, the change to XP will go into effect for the “Brick or Treat” Lego Pass and studs will be retired. That same day, the “battle stars” that you earn with XP in Fortnite’s main battle pass will also be removed.

Epic is continuing to make Fortnite more of a platform with lots of Epic- and creator-made experiences rather than just one main mode. Letting players apply XP earned anywhere to all of its various passes helps address some growing pains with that transition — especially now that Epic has had about a year since the launch of Lego Fortnite, Rocket Racing, and Fortnite Festival to figure out what players want.

Open-source AI must reveal its training data, per new OSI definition

Meta’s Llama contends with the new Open Source Initiative definition of truly “open” AI.

The Open Source Initiative (OSI) has released its official definition of “open” artificial intelligence, setting the stage for a clash with tech giants like Meta — whose models don’t fit the rules.

#opensource #ai #osi #technology #meta

OSI has long set the industry standard for what constitutes open-source software, but AI systems include elements that aren’t covered by conventional licenses, like model training data. Now, for an AI system to be considered truly open source, it must provide:

  • Access to details about the data used to train the AI so others can understand and re-create it
  • The complete code used to build and run the AI
  • The settings and weights from the training, which help the AI produce its results

This definition directly challenges Meta’s Llama, widely promoted as the largest open-source AI model. Llama is publicly available for download and use, but it has restrictions on commercial use (for applications with over 700 million users) and does not provide access to training data, causing it to fall short of OSI’s standards for unrestricted freedom to use, modify, and share.

Open Source Initiative

Why we need Open Source Artificial Intelligence (AI)

Open Source has demonstrated that massive benefits accrue to everyone after removing the barriers to learning, using, sharing and improving software systems. These benefits are the result of using licenses that adhere to the Open Source Definition. For AI, society needs at least the same essential freedoms of Open Source to enable AI developers, deployers and end users to enjoy those same benefits: autonomy, transparency, frictionless reuse and collaborative improvement.

What is Open Source AI

When we refer to a “system,” we are speaking both broadly about a fully functional structure and its discrete structural elements. To be considered Open Source, the requirements are the same, whether applied to a system, a model, weights and parameters, or other structural elements.

An Open Source AI is an AI system made available under terms and in a way that grant the freedoms to:

  • Use the system for any purpose and without having to ask for permission.
  • Study how the system works and inspect its components.
  • Modify the system for any purpose, including to change its output.
  • Share the system for others to use with or without modifications, for any purpose.

These freedoms apply both to a fully functional system and to discrete elements of a system. A precondition to exercising these freedoms is to have access to the preferred form to make modifications to the system.

Preferred form to make modifications to machine-learning systems

The preferred form of making modifications to a machine-learning system must include all the elements below:

  • Data Information: Sufficiently detailed information about the data used to train the system so that a skilled person can build a substantially equivalent system.

Data Information shall be made available under OSI-approved terms.
In particular, this must include: (1) the complete description of all data used for training, including (if used) of unshareable data, disclosing the provenance of the data, its scope and characteristics, how the data was obtained and selected, the labeling procedures, and data processing and filtering methodologies; (2) a listing of all publicly available training data and where to obtain it; and (3) a listing of all training data obtainable from third parties and where to obtain it, including for fee.

  • Code: The complete source code used to train and run the system. The Code shall represent the full specification of how the data was processed and filtered, and how the training was done. Code shall be made available under OSI-approved licenses.

For example, if used, this must include code used for processing and filtering data, code used for training including arguments and settings used, validation and testing, supporting libraries like tokenizers and hyperparameters search code, inference code, and model architecture.

  • Parameters: The model parameters, such as weights or other configuration settings. Parameters shall be made available under OSI-approved terms.

For example, this might include checkpoints from key intermediate stages of training as well as the final optimizer state.

The licensing or other terms applied to these elements and to any combination thereof may contain conditions that require any modified version to be released under the same terms as the original.

Open Source models and Open Source weights

For machine learning systems,

  • An AI model consists of the model architecture, model parameters (including weights) and inference code for running the model.
  • AI weights are the set of learned parameters that overlay the model architecture to produce an output from a given input.

The preferred form to make modifications to machine learning systems also applies to these individual components. “Open Source models” and “Open Source weights” must include the data information and code used to derive those parameters.

The Open Source AI Definition does not require a specific legal mechanism for assuring that the model parameters are freely available to all. They may be free by their nature or a license or other legal instrument may be required to ensure their freedom. We expect this will become clearer over time, once the legal system has had more opportunity to address Open Source AI systems.

Definitions

  • AI system: An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

  • Machine learning: A set of techniques that allows machines to improve their performance and usually generate models in an automated manner through exposure to training data, which can help identify patterns and regularities rather than through explicit instructions from a human. The process of improving a system’s performance using machine learning techniques is known as “training”.

Universal Music partners with AI company building an ‘ethical’ music generator

Universal Music Group (UMG) announced a new deal centered on creating an “ethical” foundational model for AI music generation. It’s partnered with a company called Klay Vision that’s creating a “Large Music Model” named KLayMM and plans to launch out of stealth mode with a product within months. Ary Attie, its founder and CEO, said the company believes “the next Beatles will play with KLAY.”

#universalmusic #music #technology #ai

The two say the model will work “in collaboration with the music industry and its creators,” though they offer few details about how, while Klay plans to make music AI “more than a short-lived gimmick.”

This is how the companies explain their shared goals:

Building generative AI music models ethically and fully respectful of copyright, as well as name and likeness rights, will dramatically lessen the threat to human creators and stand the greatest opportunity to be transformational, creating significant new avenues for creativity and future monetization of copyrights.

As for how whatever it is they’re working on will affect human artists:

KLAY is developing a global ecosystem to host AI-driven experiences and content, including accurate attribution, and will not compete with artists’ catalogs in traditional music services.

UMG’s new partnership comes as it is involved in lawsuits against AI music generator sites and Anthropic, and in May, it ended a short stand-off with TikTok by signing a new licensing arrangement that covered, among other things, AI-generated music.

Klay is also run by chief content officer Thomas Hesse, who was previously Sony Music Entertainment’s president. Former Google Deepmind researcher Björn Winckler, who led the development of Google’s Lyria AI music model, is joining the company as its head of research.

Bitcoin hits $70K amid huge ETF inflow streak

Bitcoin has hit a price of $70,000 as inflows to ETFs in the US notched a six-day streak, surpassing $2.4 billion.

The price of Bitcoin has just crossed the $70,000 milestone for the first time since June 10 following two strong weeks of inflows into the United States spot Bitcoin exchange-traded funds (ETFs).

Bitcoin (BTC) jumped by 3% in the last day to a high of $70,150 on Oct. 28 before falling back below $70,000, TradingView data shows.

#bitcoin #crypto #etf #wallStreet

Its gains come alongside another notable rise in inflows into Bitcoin ETFs. As CoinShares reported, Bitcoin funds recorded $920 million in inflows for the week ending Oct. 25, bringing year-to-date inflows to $25.4 billion.

This followed a large streak of inflows into the 11 US spot-based ETFs for the week ending Oct. 18, accumulating over $2.1 billion in net inflows, according to Farside Investors.

Several crypto traders also claim Bitcoin saw a “golden cross,” a bullish chart pattern in which the 50-day moving average crosses above the 200-day moving average, signaling a potential price breakthrough.
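
For readers who want to check the pattern themselves, here is a minimal sketch using pandas, assuming only a series of daily closing prices; the file name and column names are hypothetical, not from the article.

```python
import pandas as pd

def find_golden_cross(close: pd.Series) -> pd.Series:
    """Return True on days where the 50-day MA first crosses above the 200-day MA."""
    ma50 = close.rolling(window=50).mean()
    ma200 = close.rolling(window=200).mean()
    above = ma50 > ma200
    # A golden cross is the first day the short MA sits above the long MA.
    return above & ~above.shift(1, fill_value=False)

# Example with hypothetical daily closing prices:
# prices = pd.read_csv("btc_daily.csv", index_col="date", parse_dates=True)["close"]
# cross_days = prices[find_golden_cross(prices)]
```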

The Small Language Model Revolution: A Guide to Modern AI Efficiency

In the ever-expanding universe of artificial intelligence, a surprising trend is emerging. While industry giants race to build ever-larger language models, a quieter but equally significant revolution is taking place in the realm of Small Language Models (SLMs). These compact but powerful models are reshaping how businesses and developers think about AI deployment, proving that effectiveness isn’t always about size.

#ai #slm #technology

Small Language Models, typically containing fewer than 3 billion parameters, represent a fundamental shift in AI architecture. Unlike their massive counterparts such as GPT-4 or Claude 3, which require extensive computational resources and cloud infrastructure, SLMs are designed for efficiency and specialized performance. This isn’t just about saving resources – it’s about rethinking how AI can be practically deployed in real-world scenarios.

H2O.ai’s Mississippi models exemplify this new approach. The recently released Mississippi-2B, with just 2.1 billion parameters, and its even smaller sibling Mississippi-0.8B, are revolutionizing document processing and OCR tasks. What’s remarkable isn’t just their size, but their performance. The 0.8B version consistently outperforms models 20 times its size on OCRBench.

The secret lies in their architecture. Instead of trying to be generalists, these models employ specialized techniques like 448×448 pixel tiling for image processing, allowing them to maintain high accuracy while keeping computational requirements modest. They’re trained on carefully curated datasets – 17.2 million examples for the 2B version and 19 million for the 0.8B model – focusing on quality over quantity.
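
The article does not spell out how the tiling works, but the general idea of splitting an image into fixed-size tiles can be sketched roughly as follows; this is illustrative only, and the function below is an assumption, not Mississippi's actual preprocessing code.

```python
from PIL import Image

TILE = 448  # tile edge length mentioned in the article

def tile_image(path: str):
    """Split an image into non-overlapping 448x448 tiles (illustrative sketch;
    a real pipeline may pad, resize, or overlap tiles)."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    tiles = []
    for top in range(0, h, TILE):
        for left in range(0, w, TILE):
            box = (left, top, min(left + TILE, w), min(top + TILE, h))
            tiles.append(img.crop(box))
    return tiles
```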

This specialized approach pays dividends in real-world applications. For businesses, the advantages are clear: faster processing speeds, lower operational costs, and the ability to run models on standard hardware. But perhaps most importantly, SLMs can often be deployed locally, eliminating the need to send sensitive data to external servers – a crucial consideration for industries like healthcare, finance, and legal services.

The rise of SLMs also challenges the traditional AI development paradigm. Instead of throwing more parameters at problems, developers are focusing on architectural efficiency and targeted training. This shift has led to innovations in model compression, knowledge distillation, and specialized architectures that squeeze maximum performance from minimal resources.

Choosing the Right SLM for Your Needs
The growing landscape of Small Language Models presents both opportunities and challenges for organizations looking to implement AI solutions. Mississippi’s success in document processing demonstrates how specialized SLMs can excel in specific domains, but it also raises important questions about model selection and deployment.

When evaluating SLMs, performance metrics need to be considered in context. While Mississippi’s OCRBench scores are impressive, they’re particularly relevant for document processing tasks. Organizations need to evaluate models based on their specific use cases. This might mean looking at inference speed for real-time applications, accuracy on domain-specific tasks, or resource requirements for edge deployment.

Resource requirements vary significantly even among SLMs. Mississippi’s 0.8B version can run on relatively modest hardware, making it accessible to smaller organizations or those with limited AI infrastructure. However, some “small” models still require substantial computational resources despite their reduced parameter count. Understanding these requirements is crucial for successful deployment.

The deployment environment also matters significantly. Mississippi’s architecture allows for local deployment, which can be crucial for organizations handling sensitive data. Other SLMs might require specific frameworks or cloud infrastructure, impacting both cost and implementation complexity. Organizations need to consider not just the initial deployment but long-term maintenance and scaling requirements.

Integration capabilities represent another crucial consideration. Mississippi’s JSON output capability makes it particularly valuable for businesses looking to automate document processing workflows. However, different SLMs offer different integration options, from simple APIs to more complex custom deployment solutions. The availability of documentation, community support, and integration tools can significantly impact implementation success.

The future of SLMs looks promising, with ongoing research pushing the boundaries of what’s possible with compact models. H2O.ai’s success with Mississippi suggests we’re just beginning to understand how specialized architectures can overcome the limitations of model size. As more organizations recognize the advantages of SLMs, we’re likely to see increased innovation in model efficiency and specialization.

For businesses and developers, the message is clear: bigger isn’t always better in AI. The key is finding the right tool for the job, and increasingly, that tool might be a Small Language Model. As Mississippi demonstrates, with smart architecture and focused training, even modest-sized models can achieve remarkable results. The SLM revolution isn’t just about doing more with less – it’s about doing it better.

What is a Small Language Model?

The characteristics and capabilities of Small Language Models (SLMs).

Size and Architecture

Small language models are, as the name suggests, smaller than large language models. This can be measured in several ways, including:

  1. Number of parameters: SLMs usually have between 10 million and 100 million parameters, whereas larger models can have billions.
  2. Model size: SLMs typically use fewer layers, fewer attention heads, and smaller hidden dimensions.
  3. Model architecture: SLMs may employ simpler architectures, such as fewer transformer layers overall or lighter-weight variants of them (e.g., smaller attention heads or fewer feed-forward layers).

These smaller sizes and architectures can make SLMs more efficient to train and deploy, but they also limit their capacity to process complex texts and understand nuanced language.

Training Data and Generalization

SLMs are often trained on smaller datasets compared to larger language models. This can result in:

  1. Less generalization: SLMs may not generalize as well to new, unseen data, which can limit their performance on tasks that require a broad understanding of language.
  2. Less robustness: SLMs may be more sensitive to noise, outliers, and other forms of data contamination, which can affect their performance on tasks that require robustness.

However, SLMs can still be trained on a wide range of tasks and domains, and the quality of the training data can have a significant impact on their performance.

Inference and Speed

One of the key advantages of SLMs is their ability to process text input quickly and efficiently. This can make them suitable for:

  1. Real-time applications: SLMs can be used in applications that require rapid response times, such as chatbots, language translation, or text summarization.
  2. Low-latency inference: SLMs can perform inference in a fraction of the time compared to larger models, making them suitable for applications that require fast response times.

Capabilities and Limitations

SLMs can perform well on specific tasks, such as:

  1. Text classification: SLMs can be trained to classify text into categories, such as spam vs. non-spam emails or positive vs. negative reviews.
  2. Sentiment analysis: SLMs can be trained to analyze text and determine the sentiment or emotional tone of the text.
  3. Language translation: SLMs can be trained to translate text from one language to another, although their performance may be limited to specific domains or languages.
  4. Conversational dialogue: SLMs can be trained to engage in simple conversations, although their performance may be limited to specific topics or domains.
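
As a minimal illustration of the classification and sentiment tasks above, a compact distilled model can be run locally with the Hugging Face pipeline API; the model name here is just one public example of a small classifier, not a recommendation from the article.

```python
from transformers import pipeline

# A small distilled model (~66M parameters) fine-tuned for sentiment analysis.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier(["Great phone, battery lasts all day.",
                  "The update broke everything."]))
# [{'label': 'POSITIVE', 'score': ...}, {'label': 'NEGATIVE', 'score': ...}]
```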

However, SLMs are generally not suitable for tasks that require:

  1. High-level understanding: SLMs may not be able to understand complex texts, nuance, or context.
  2. Long-range dependencies: SLMs may not be able to capture long-range dependencies or relationships in text.
  3. Multi-turn dialogue: SLMs may not be able to engage in multi-turn conversations or understand the context of a conversation.
  4. Creative writing or storytelling: SLMs are generally not suitable for tasks that require creative writing or storytelling, as they may not be able to generate novel or coherent text.

Use Cases

SLMs have a wide range of use cases, including:

  1. Chatbots: SLMs can be used to power chatbots that provide customer support, answer questions, or engage in simple conversations.
  2. Language learning platforms: SLMs can be used to provide personalized language learning experiences, such as grammar correction or vocabulary practice.
  3. Content moderation: SLMs can be used to moderate online content, such as detecting spam or hate speech.
  4. Language translation: SLMs can be used to translate text from one language to another, although their performance may be limited to specific domains or languages.

Overall, SLMs offer a balance between speed, accuracy, and cost, making them suitable for a wide range of applications that require efficient and effective language processing.

Let's dive deeper into the process of building a Small Language Model (SLM) using a Large Language Model (LLM) and explore the key components, benefits, and challenges involved.

Pruning

Pruning is a key step in reducing the size and computational requirements of a model. It involves removing unnecessary parameters, weights, or other components that are not essential for the task at hand. There are several techniques used for pruning, including:

  1. Weight pruning: Removing weights that have a low magnitude or are not essential for the task.
  2. Layer pruning: Removing entire layers or sub-layers that are not essential for the task.
  3. Neuron pruning: Removing neurons that are not essential for the task.
  4. Synaptic pruning: Removing synaptic connections that are not essential for the task.

Pruning can be done using various algorithms, including:

  1. L1 norm pruning: Removing weights with a low L1 norm (i.e., weights with a small absolute value).
  2. L2 norm pruning: Removing weights with a low L2 norm (i.e., weights with a small magnitude).
  3. ReLU pruning: Removing neurons with consistently low ReLU activations (i.e., neurons with a small output value).
  4. GELU pruning: Removing neurons with consistently low GELU activations (i.e., neurons with a small output value).
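
As a concrete sketch of magnitude-based (L1-norm) pruning, PyTorch ships utilities for exactly this; the toy model below is a stand-in for a feed-forward block, not any particular SLM.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for a transformer feed-forward block.
model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))

# L1-norm (magnitude) pruning: zero out the 30% of weights with the smallest
# absolute value in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

sparsity = (model[0].weight == 0).float().mean().item()
print(f"Layer 0 sparsity after pruning: {sparsity:.1%}")
```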

Quantization

Quantization is another key step in reducing the size and computational requirements of a model. It involves converting the weights and activations to a lower-precision data type, such as 8-bit integers or 16-bit floating-point numbers. There are several techniques used for quantization, including:

  1. Fixed-point quantization: Converting the weights and activations to fixed-point numbers.
  2. Integer quantization: Converting the weights and activations to integer numbers.
  3. Binary quantization: Converting the weights and activations to binary numbers.
  4. Perceptual quantization: Converting the weights and activations to a lower precision data type based on the perceived quality of the output.

Quantization can be done using various algorithms, including:

  1. K-means quantization: Grouping the weights and activations into k clusters and assigning each cluster to a lower precision data type.
  2. Hierarchical quantization: Quantizing the weights and activations in a hierarchical manner, starting with the most important weights and activations.
  3. Nearest-neighbor quantization: Finding the nearest neighbor in a quantization table and assigning the weight or activation to that neighbor.
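
A minimal sketch of integer quantization, assuming a simple symmetric per-tensor scheme; real toolchains (for example, PyTorch's dynamic quantization) add calibration, per-channel scales, and quantized kernels.

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    scale = w.abs().max() / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(2048, 512)  # a hypothetical weight matrix
q, scale = quantize_int8(w)
error = (w - dequantize(q, scale)).abs().mean().item()
print(f"int8 storage: {q.numel()} bytes vs {w.numel() * 4} bytes, mean abs error {error:.5f}")
```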

Knowledge Distillation

Knowledge distillation is a technique used to transfer knowledge from a larger model to a smaller model. The goal of knowledge distillation is to train the smaller model to mimic the behavior of the larger model on a specific task or dataset. There are several techniques used for knowledge distillation, including:

  1. Temperature scaling: Scaling the temperature of the larger model to reduce its entropy and transfer knowledge to the smaller model.
  2. Soft attention: Using soft attention to guide the smaller model to focus on the most important parts of the input and mimic the behavior of the larger model.
  3. Gradient distillation: Using the gradients of the larger model to train the smaller model to mimic the behavior of the larger model.
  4. Soft output distillation: Using soft output to guide the smaller model to mimic the behavior of the larger model.
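
A minimal sketch of temperature-scaled distillation in the style described above, with random logits standing in for real teacher and student outputs; the blending weight and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target KL divergence (with temperature T) and ordinary
    cross-entropy on the ground-truth labels."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # T**2 rescales the soft term so its gradients keep comparable magnitude.
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T ** 2)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Hypothetical batch: 8 examples, 10 classes.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```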

Fine-Tuning

Fine-tuning is a technique used to adapt the smaller model to a specific task or dataset. The goal of fine-tuning is to adjust the weights and biases of the smaller model to better fit the task-specific data. There are several techniques used for fine-tuning, including:

  1. Supervised fine-tuning: Training the smaller model on a supervised dataset to adjust the weights and biases.
  2. Unsupervised fine-tuning: Training the smaller model on an unsupervised dataset to adjust the weights and biases.
  3. Self-supervised fine-tuning: Training the smaller model on a self-supervised dataset to adjust the weights and biases.
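
A minimal sketch of a supervised fine-tuning loop, assuming the dataset already yields model-ready input tensors and labels; the batch size, learning rate, and epoch count are illustrative defaults, not recommendations from the article.

```python
import torch
from torch.utils.data import DataLoader

def fine_tune(model, dataset, epochs=3, lr=2e-5, device="cpu"):
    """Adjust a small model's weights on task-specific (inputs, labels) pairs."""
    model.to(device).train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=16, shuffle=True)
    for _ in range(epochs):
        for inputs, labels in loader:
            inputs, labels = inputs.to(device), labels.to(device)
            logits = model(inputs)
            loss = torch.nn.functional.cross_entropy(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```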

Key Benefits

The key benefits of using an SLM built from an LLM include:

  1. Improved efficiency: The SLM is typically smaller and more efficient than the original LLM, making it more suitable for real-time applications.
  2. Reduced computational requirements: The SLM requires less computational resources than the original LLM, making it more suitable for deployed systems.
  3. Better performance: The SLM can achieve similar or even better performance than the original LLM on specific tasks or datasets.
  4. Increased flexibility: The SLM can be fine-tuned on a variety of tasks or datasets, making it a more flexible and adaptable model.

Key Challenges

The key challenges of using an SLM built from an LLM include:

  1. Reduced capacity: The SLM has reduced capacity compared to the original LLM, making it less suitable for tasks that require high-level understanding or long-range dependencies.
  2. Increased risk of overfitting: The SLM may be more prone to overfitting due to its reduced capacity and smaller dataset.
  3. Difficulty in fine-tuning: Fine-tuning the SLM can be challenging due to its reduced capacity and smaller dataset.
  4. Difficulty in evaluating performance: Evaluating the performance of the SLM can be challenging due to its reduced capacity and smaller dataset.

Future Directions

Future directions for SLMs include:

  1. More efficient pruning techniques: Developing more efficient pruning techniques to reduce the size and computational requirements of SLMs.
  2. More advanced quantization techniques: Developing more advanced quantization techniques to reduce the size and computational requirements of SLMs.
  3. More effective knowledge distillation techniques: Developing more effective knowledge distillation techniques to transfer knowledge from larger models to smaller models.
  4. More efficient fine-tuning techniques: Developing more efficient fine-tuning techniques to adapt smaller models to specific tasks or datasets.

Jeff Bezos Reportedly Has Secretive "Personal Reasons" for Wanting to Escape to Mars

"There’s no democracy in space."

At some point, Washington Post owner Jeff Bezos reportedly made a cryptic admission to a power broker — and that strange comment is taking on new significance in light of another recent message he's sending to the public.

#jeffbezos #space #blueorigin

Damn, that feels a bit dark. Otherwise I support the idea that we should build colonies on other planets in order to…

Bezos is a dark guy. He isn't for humanity. Look at what he owns and how he operates.

He is a typical elite. People like him feel they are better suited to run society than the population is.

So weird that we see this phenomenon throughout history. Elites have always behaved and thought the same.

F*ck the peasants

To be fair, most, as we see on here, are lazy, not willing to learn, and looking for handouts.

Few are the ones who actually do something to impact, well, much of anything.

As they say, nobody ever erected a statue for a critic.

Following WaPo's surprise decision not to endorse either candidate for president — reportedly because its billionaire owner vetoed staff's decision to name Kamala Harris as its pick — New Yorker journalist Sarah Larson recounted her own Bezos lore.

"Once again I’m reflecting on the time I interviewed a powerful guy who knows Jeff Bezos," she wrote on X, "and who offhandedly told me, 'Jeff has personal reasons for wanting to get to Mars... I’m not comfortable sharing what they are.'"

Larson jokingly followed up her own tweet with a seeming reference to the newspaper's tagline, "Democracy dies in darkness," which was taken up in the aftermath of Donald Trump's first presidential win in 2016.

"There’s no democracy in space," the New Yorker writer quipped.

Larson has an impressive portfolio at the magazine, but the fact that she's not naming her source makes it impossible to know which powerful Bezos acquaintance she's referring to — or what Bezos' secretive personal reasons may be, for that matter.

Bezos has seemed to distance himself from Martian colonization ambitions as his apparent rival, Elon Musk, goes all-in on the vision.

Years after stepping down as CEO of Amazon to spend more time on his space launch company, Blue Origin, the billionaire told podcaster Lex Fridman that he thinks that humans will likely live inside massive cylindrical space stations.

A little bit more snow coming ;)
#winter #newfoundland #snow

oh snow!! We haven’t been buried yet, enjoy all that floof lol!

Good morning/afternoon Professor B!

!PIZZA

M/A Special K :)

Mostly all melted now :) Thank goodness. I hates it at my old age. White on my face is enough. lol

!BBH !DOOK


You just got DOOKed!
@bradleyarrow thinks your content is the shit.
They have 20/60 DOOK left to drop today.
dook_logo
Learn all about this shit in the toilet paper! 💩

So, no snowball fights with you, got it! 😉 😆

Oh, no, I love a good snowball fight ;)

@generikat! @bradleyarrow likes your content! so I just sent 1 BBH to your account on behalf of @bradleyarrow. (21/100)

