Facebook, Amazon etc....

berbatrick

Renaissance Man
Scout
Joined
Oct 22, 2010
Messages
21,645
FOR SOME REASON, WE CAN’T FIND A SINGLE LEFTIST MARK ZUCKERBERG INVITED TO HIS DINNERS WITH PUNDITS FROM “ACROSS THE SPECTRUM”
During a congressional hearing on Wednesday, Rep. Alexandria Ocasio-Cortez asked Mark Zuckerberg about his "ongoing dinner parties with far-right figures."

This was terribly unfair to Mark, and Ocasio-Cortez owes him an apology. Yes, as Politico recently reported, he’s been holding lots of private get-togethers with prominent hard-right media figures. According to the article, these include Tucker Carlson of Fox News; talk show host Hugh Hewitt; Ben Shapiro; former Free Beacon editor Matt Continetti; and Brent Bozell, founder of the Media Research Center, which exists “to expose and neutralize the propaganda arm of the Left: the national news media.”

But this isn’t because Mark is cultivating right-wingers specifically. Rather, as he explained on Facebook, he just loves to have dinner with “lots of people across the spectrum on lots of different issues all the time.”

...
What can we say about this?

Cynics might tell you that Zuckerberg, the fifth-richest person on Earth and head of a giant international conglomerate, is largely sympathetic to the corporate right. According to a Bloomberg News analysis, the 2017 GOP tax bill saved Facebook $8.3 billion in just one year.

The same cynics might mention that Facebook’s first outside investor was Peter Thiel, who now serves on the company’s board — and is one of just two members of its compensation and governance committee. Thiel, an outspoken supporter of President Donald Trump, secretly funded a lawsuit that destroyed the news outlet Gawker.

These cynical cynics would point out that, despite constant accusations from the right-wing media that Facebook “silences” conservatives, the right-wing media is wildly popular on Facebook. They’d say this is no surprise, since Facebook’s vice president for U.S. public policy is Joel Kaplan, a former aide to George W. Bush and current member of the board of Bush’s presidential museum. According to the Wall Street Journal, Kaplan has “wielded his influence to postpone or kill projects that risk upsetting conservatives.”

But the most likely explanation here is that Mark, with his intense quest for knowledge, did in fact share invigorating meals with these leftist individuals and organizations, and they’ve all just forgotten. They’re pretty busy; it’s easy to see how that would happen.
https://theintercept.com/2019/10/25...C2CToM-2WbMCDUyBLRPBcQ-fMoHvPfvjAG7LMYFGxaV8w
 

Suedesi

Full Member
Joined
Aug 3, 2001
Messages
23,873
Location
New York City


There you have it. He worked backwards from the data at his quant hedge fund: the web was growing at 2,300% annually, and books had more categories than any other product. He found wholesalers who could act as his warehouse and put it all together. You've got to admire the vision, clarity of thinking, laser-sharp focus and optimism.

Now f--koff into the sunset Bezos.
 

MTF

Full Member
Joined
Aug 17, 2009
Messages
5,243
Location
New York City
These are massive monopolies. Hopefully the next Prez breaks up big tech.
What exactly? Facebook needs to divest WhatsApp and Instagram, Amazon needs to spin off AWS, and Google.... what? They'd still hold essentially the same positions even after that.
 

Raoul

Admin
Staff
Joined
Aug 14, 1999
Messages
130,183
Location
Hollywood CA
What exactly? Facebook needs to divest WhatsApp and Instagram, Amazon needs to spin off AWS, and Google.... what? They'd still hold essentially the same positions even after that.
I'll leave that to the pros. ;)

It is stifling competition to have these companies get this big and powerful.
 

VorZakone

What would Kenny G do?
Joined
May 9, 2013
Messages
32,907
Their dominance was always going to happen IMO. Take Google, for example: who wants 5 search engines? It was always gonna be most convenient to have just 1.
 

MTF

Full Member
Joined
Aug 17, 2009
Messages
5,243
Location
New York City
I'll leave that to the pros. ;)

It is stifling competition to have these companies get this big and powerful.
Time takes care of these things. We're over-focused on it because we're living through it now. Facebook will continue to be profitable for years to come, but I don't see decades-long preeminence for them if their innovation continues to be so lackluster. It would just become another large global company, like Coke or P&G.
 

Abizzz

Full Member
Joined
Mar 28, 2014
Messages
7,637
This is outrageous... fecking Apple and their 'newly designed' card.
They know he earns all the money, and maybe they also know / expect a divorce is on the horizon. Might be like that famous Target example where they knew about a teenage pregnancy before the family did. Wouldn't surprise me if they had solid data on why he's a lot more creditworthy but can't use it in their defense because they don't want people knowing they have it.
 
Last edited:

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,625
Location
London
This is outrageous... fecking Apple and their 'newly designed' card.
Aren't these things decided by totally uninterpretable data-driven algorithms? Bias in data is a well-known problem in AI, with many researchers working hard to fix (or at least mitigate) it.
 

Suedesi

Full Member
Joined
Aug 3, 2001
Messages
23,873
Location
New York City
Aren't these things decided by totally uninterpretable data-driven algorithms? Bias in data is a well-known problem in AI, with many researchers working hard to fix (or at least mitigate) it.
Data-driven algos are clearly designed wrong then. And the problem is you can't really appeal it or reason with it - it's a black box, "computer says no" kind of thing.

The facts are: husband and wife are the same household, file jointly and have the same creditworthiness; she actually has the higher credit score, and she gets the lower credit line.

Apple hiding behind Goldman or the algos or TransUnion is not a good look.
 

Cheesy

Bread with dipping sauce
Scout
Joined
Oct 16, 2011
Messages
36,181
Time takes care of these things. We're over-focused on it because we're living through it now. Facebook will continue to be profitable for years to come, but I don't see decades-long preeminence for them if their innovation continues to be so lackluster. It would just become another large global company, like Coke or P&G.
Facebook's influence in the life of the average individual, though, is quite clearly far greater than Coca-Cola's, given the extent to which they now control information and hold personal data. News sites rely on Facebook for their stories to reach a mass audience online, and politicians rely on it to promote their message to the wider public. I'd argue that's a more powerful position to be in than that of a major fast-food or drinks company.
 

Suedesi

Full Member
Joined
Aug 3, 2001
Messages
23,873
Location
New York City
Next in Google’s Quest for Consumer Dominance: Banking
Search giant plans to partner with banks to offer checking accounts
Google will soon offer checking accounts to consumers, becoming the latest Silicon Valley heavyweight to push into finance. The project, code-named Cache, is expected to launch next year with accounts run by Citigroup Inc. and a credit union at Stanford University, a tiny lender in Google’s backyard. Big tech companies see financial services as a way to get closer to users and glean valuable data. Apple Inc. introduced a credit card this summer. Amazon.com Inc. has talked to banks about offering checking accounts. Facebook Inc. is working on a digital currency it hopes will upend global payments.



This is awesome. Google already knows algorithmically what you want to buy; now they can go ahead and process the payment!
 

Pexbo

Winner of the 'I'm not reading that' medal.
Joined
Jun 2, 2009
Messages
68,692
Location
Brizzle
Supports
Big Days
Data-driven algos are clearly designed wrong then. And the problem is you can't really appeal it or reason with it - it's a black box, "computer says no" kind of thing.

The facts are: husband and wife are the same household, file jointly and have the same creditworthiness; she actually has the higher credit score, and she gets the lower credit line.

Apple hiding behind Goldman or the algos or TransUnion is not a good look.
They're not designed wrong; the actual algorithms are neutral. It's just incredibly difficult to train them without bias.
 

MTF

Full Member
Joined
Aug 17, 2009
Messages
5,243
Location
New York City
Facebook's influence in the life of the average individual, though, is quite clearly far greater than Coca-Cola's, given the extent to which they now control information and hold personal data. News sites rely on Facebook for their stories to reach a mass audience online, and politicians rely on it to promote their message to the wider public. I'd argue that's a more powerful position to be in than that of a major fast-food or drinks company.
I agree, but we were talking about breaking it up, the way you would a monopoly. In Facebook's case there's nothing to break up besides potentially divesting the two other platforms they acquired, and even that doesn't solve what you're talking about, which is inherent to the core platform. In other words, it's much more a regulatory problem than an antitrust problem. People proposing an antitrust framework are reaching for the wrong tool.

And I still maintain that enough time often resolves situations we find particularly unfair in the moment, although it usually replaces them with other injustices...
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,625
Location
London
Data-driven algos are clearly designed wrong then. And the problem is you can't really appeal it or reason with it - it's a black box, "computer says no" kind of thing.

The facts are: husband and wife are the same household, file jointly and have the same creditworthiness; she actually has the higher credit score, and she gets the lower credit line.

Apple hiding behind Goldman or the algos or TransUnion is not a good look.
I work in the field, and it is a very big problem. Some of the top ML scientists are working on it, trying to mitigate it.

The main problem is that the algorithms which work really well are very data-driven; essentially they learn the data. And the data is biased, so by default the algorithms are biased too.
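
To make that concrete, here's a toy sketch in Python. Everything in it is invented - the synthetic "income" and "credit score" numbers and the made-up approval rule - and it has nothing to do with how Goldman's actual model works. The point is just that if the historical labels favour one group, a model trained on them reproduces the gap even though it never sees gender at all:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # 0 = women, 1 = men (purely hypothetical)
income = rng.normal(40 + 15 * group, 10, n)    # historical pay gap baked into the data
score = rng.normal(700, 40, n)                 # credit score, independent of group

# Historical decisions were biased: for the same numbers, men were approved
# more often, so the labels themselves carry the bias.
approved = (0.02 * income + 0.01 * score + 1.0 * group + rng.normal(0, 1, n)) > 8.5

# Train WITHOUT the protected attribute: the "neutral" algorithm.
X = np.column_stack([income, score])
model = LogisticRegression(max_iter=1000).fit(X, approved)

pred = model.predict(X)
print("predicted approval rate, men:  ", pred[group == 1].mean())
print("predicted approval rate, women:", pred[group == 0].mean())
# The gap persists: income acts as a proxy for the group, and reproducing
# the biased labels is exactly what the model is rewarded for.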
 

Suedesi

Full Member
Joined
Aug 3, 2001
Messages
23,873
Location
New York City
I work in the field, and it is a very big problem. Some of the top ML scientists are working on it, trying to mitigate it.

The main problem is that the algorithms which work really well are very data-driven; essentially they learn the data. And the data is biased, so by default the algorithms are biased too.
I really don't follow. How can the data be biased? Data is by definition a raw input. How you parse, pattern-fit, design and manipulate the raw data (input) to arrive at a decision (output) seems to be where the problem is, not the raw data itself. More explanation needed.
 

Conor

Full Member
Joined
Feb 19, 2011
Messages
5,556
I really don't follow. How can the data be biased? Data is by definition a raw input. How you parse, pattern-fit, design and manipulate the raw data (input) to arrive at a decision (output) seems to be where the problem is, not the raw data itself. More explanation needed.
The world is biased, so data coming from the world we live in will just perpetuate those biases. Nobody is out there writing algorithms that purposefully create biased decisions (I hope).
 

Abizzz

Full Member
Joined
Mar 28, 2014
Messages
7,637
There is a difference between data being "biased" and data showing things the person viewing doesn't want to see because of their political inclinations.
 

Balljy

Full Member
Joined
Jan 31, 2016
Messages
3,325
I really don't follow. How can the data be biased? Data is by definition a raw input. How you parse, pattern-fit, design and manipulate the raw data (input) to arrive at a decision (output) seems to be where the problem is, not the raw data itself. More explanation needed.
There was one in the news recently for the Apple credit card, where a married couple who owned everything equally ended up with the husband being given 20x the credit limit of his wife. It wasn't deliberate sexism, since an algorithm had worked it out; it was due to bias in the historic data. The computer therefore produced a biased result in line with the previous data.

To get an impartial result they would have to actually put work into the algorithm to counter the bias in the historic data.

https://www.bbc.co.uk/news/business-50432634
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,625
Location
London
I really don't follow. How can the data be biased? Data is by definition a raw input. How you parse, pattern-fit, design and manipulate the raw data (input) to arrive at a decision (output) seems to be where the problem is, not the raw data itself. More explanation needed.
The data is definitely biased. Men earn much more than women, in general. So an algorithm trained on real-world data might learn to give higher credit scores to men.

Similarly, Google's photo classifier labelled Black people as gorillas. It wasn't malicious; there just weren't many Black people in the training set, so the algorithm learned that black = gorilla. It got corrected and works fine now, simply by retraining it on a different, less biased training set. The Microsoft Twitter bot that went racist almost immediately wasn't malicious either; the internet is full of racism and hate speech, and trolls were deliberately throwing racism at it.

Most current machine learning algorithms (wrongly called AI) are very data-centric models: they essentially just learn the data, and are in effect a compressed version of the dataset they are trained on. And there is almost always some bias in the data, sometimes more, sometimes less. Removing the bias from the data without harming the overall performance of the algorithm is a very hard problem. Until a few years ago, people weren't even aware of how big a problem it is. Now we know it's a problem, at times even a deal-breaker, but the solution is not straightforward.
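
To give a feel for why that's hard, here's a small, self-contained Python sketch of one standard mitigation, the "reweighing" idea: weight the training rows so that group membership and the historical label look statistically independent, then train as usual. All of the data and numbers are invented for illustration; this is not how any real credit model is built. The approval-rate gap usually shrinks, but the measured "accuracy" drops, because the yardstick is still the biased historical label:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # hypothetical protected attribute
income = rng.normal(40 + 15 * group, 10, n)    # historical pay gap
score = rng.normal(700, 40, n)
# Biased historical labels: same numbers, higher approval for group 1.
approved = (0.02 * income + 0.01 * score + 1.0 * group + rng.normal(0, 1, n)) > 8.5
X = np.column_stack([income, score])

def reweighing_weights(g, y):
    """w(g, y) = P(g) * P(y) / P(g, y), estimated from the training data."""
    w = np.empty(len(g), dtype=float)
    for gv in np.unique(g):
        for yv in np.unique(y):
            mask = (g == gv) & (y == yv)
            w[mask] = (g == gv).mean() * (y == yv).mean() / mask.mean()
    return w

plain = LogisticRegression(max_iter=1000).fit(X, approved)
reweighed = LogisticRegression(max_iter=1000).fit(
    X, approved, sample_weight=reweighing_weights(group, approved))

for name, m in [("plain", plain), ("reweighed", reweighed)]:
    pred = m.predict(X)
    gap = pred[group == 1].mean() - pred[group == 0].mean()
    acc = (pred == approved).mean()
    print(f"{name:9s}  approval-rate gap: {gap:.2f}   accuracy vs biased labels: {acc:.2f}")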
 

Conor

Full Member
Joined
Feb 19, 2011
Messages
5,556
There is a difference between data being "biased" and data showing things the person viewing doesn't want to see because of their political inclinations.
What does this even mean?
 

Abizzz

Full Member
Joined
Mar 28, 2014
Messages
7,637
What does this even mean?
Data isn't "biased" if it correctly shows a biased society. Data is biased if the way you measure it distorts what you are actually measuring. If you want the data on pay to change you need to change society, not "unbias" the data.
 

Conor

Full Member
Joined
Feb 19, 2011
Messages
5,556
Data isn't "biased" if it correctly shows a biased society. Data is biased if the way you measure it distorts what you are actually measuring. If you want the data on pay to change you need to change society, not "unbias" the data.
Not sure what any of that has to do with 'political inclinations', but if you are designing machine learning algorithms to help shape the future, and all current data is portraying a biased society, you are going to need to account for that in some way during the design, if you want to change how society works going forward.
 

Abizzz

Full Member
Joined
Mar 28, 2014
Messages
7,637
Not sure what any of that has to do with 'political inclinations', but if you are designing machine learning algorithms to help shape the future, and all current data is portraying a biased society, you are going to need to account for that in some way during the design, if you want to change how society works going forward.
The political inclination is the "change how society works going forward" part, at least in my mind.

Don't get me wrong, I want a racism-free, sexism-free world in the future too. I just don't think manipulating data to achieve machine learning that changes the world is the way forward here. We need to change the attitudes in heads and hearts that lead to that data through actual policy, education and dialogue, not by masking it in algorithms.
 

Conor

Full Member
Joined
Feb 19, 2011
Messages
5,556
The political inclination is the "change how society works going forward" part, at least in my mind.

Don't get me wrong, I want a racism-free, sexism-free world in the future too. I just don't think manipulating data to achieve machine learning that changes the world is the way forward here. We need to change the attitudes in heads and hearts that lead to that data through actual policy, education and dialogue, not by masking it in algorithms.
If institutions are using ML to make decisions, then I don't understand how you are going to impact those decisions without doing it (I don't think manipulating the raw data would be how it's done). A mortgage adviser can have whatever social opinions they want; if at the end of it all they are being told by a program on their computer whether to approve or deny, nothing will change.
 

Abizzz

Full Member
Joined
Mar 28, 2014
Messages
7,637
If institutions are using ML to make decisions, then I don't understand how you are going to impact those decisions without doing it. A mortgage adviser can have whatever social opinions they want; if at the end of it all they are being told by a program on their computer whether to approve or deny, nothing will change.
Ah yeah, in those cases I fully agree: if you don't, they will just reinforce disadvantages for those already disadvantaged. I'm just very skeptical that it can really be done in a "neutral" way, and have thus become very critical of the use of such systems at critical points in society (policing, uni admissions etc.).
 

Suedesi

Full Member
Joined
Aug 3, 2001
Messages
23,873
Location
New York City
The data is definitely biased. Men earn much more than women, in general. So an algorithm trained on real-world data might learn to give higher credit scores to men.

Similarly, Google's photo classifier labelled Black people as gorillas. It wasn't malicious; there just weren't many Black people in the training set, so the algorithm learned that black = gorilla. It got corrected and works fine now, simply by retraining it on a different, less biased training set. The Microsoft Twitter bot that went racist almost immediately wasn't malicious either; the internet is full of racism and hate speech, and trolls were deliberately throwing racism at it.

Most current machine learning algorithms (wrongly called AI) are very data-centric models: they essentially just learn the data, and are in effect a compressed version of the dataset they are trained on. And there is almost always some bias in the data, sometimes more, sometimes less. Removing the bias from the data without harming the overall performance of the algorithm is a very hard problem. Until a few years ago, people weren't even aware of how big a problem it is. Now we know it's a problem, at times even a deal-breaker, but the solution is not straightforward.
Well, that's not actually the case, is it? We're talking about a married couple with equal shares in property who file joint tax returns. In fact, her credit score is higher due to certain factors prior to the marriage. On top of that, she's a US citizen whereas he's a felony away from being deported. So the data shows that she's the better credit risk, yet the algo grants him 20x the credit line.
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,625
Location
London
Well, that's not actually the case, is it? We're talking about a married couple with equal shares in property who file joint tax returns. In fact, her credit score is higher due to certain factors prior to the marriage. On top of that, she's a US citizen whereas he's a felony away from being deported. So the data shows that she's the better credit risk, yet the algo grants him 20x the credit line.
If you're trying to totally ignore whatever I have been explaining to you in the last 3 posts, then kudos to you, great job.
 

Suedesi

Full Member
Joined
Aug 3, 2001
Messages
23,873
Location
New York City
If you're trying to totally ignore whatever I have been explaining to you in the last 3 posts, then kudos to you, great job.
Alternatively, you've done a piss-poor job of explaining yourself.

You could have said that bias in machine learning is the phenomenon of results that are systematically prejudiced due to faulty assumptions.

You could have added that the data used to train the algorithm is finite, and therefore never fully reflects reality.

Bias also arises from the choice of training and test data and how well they represent the true population.

So while algos can be a reflection of the population they are attempting to model, there should be constraints to ensure they don't simply entrench the existing inequalities in our society.
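
To illustrate the "representation of the true population" point, here's a toy Python sketch (all of it invented, no real data): two groups whose outcomes follow different rules, a training sample where one group is only about 5% of the rows, and a test set that looks like the real 50/50 population. Same "neutral" algorithm, very different error rates:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_population(n_a, n_b):
    """Two groups whose label depends on the features in different ways."""
    xa = rng.normal(0, 1, (n_a, 2))
    ya = (xa[:, 0] + xa[:, 1] > 0).astype(int)   # group A's true rule
    xb = rng.normal(0, 1, (n_b, 2))
    yb = (xb[:, 0] - xb[:, 1] > 0).astype(int)   # group B's true rule
    X = np.vstack([xa, xb])
    y = np.concatenate([ya, yb])
    g = np.concatenate([np.zeros(n_a, int), np.ones(n_b, int)])
    return X, y, g

# Training sample: group B is only ~5% of the rows (finite data that doesn't
# reflect reality). Test set: the true 50/50 population.
X_tr, y_tr, _ = make_population(9_500, 500)
X_te, y_te, g_te = make_population(5_000, 5_000)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

print("accuracy on group A:", (pred[g_te == 0] == y_te[g_te == 0]).mean())
print("accuracy on group B:", (pred[g_te == 1] == y_te[g_te == 1]).mean())
# The model essentially learns group A's rule; group B pays for being
# under-represented, even though the code never looks at group membership.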
 

Conor

Full Member
Joined
Feb 19, 2011
Messages
5,556
Well, that's not actually the case, is it? We're talking about a married couple with equal shares in property who file joint tax returns. In fact, her credit score is higher due to certain factors prior to the marriage. On top of that, she's a US citizen whereas he's a felony away from being deported. So the data shows that she's the better credit risk, yet the algo grants him 20x the credit line.
You're literally explaining why data bias makes these algorithms unfair :lol: