SARS CoV-2 coronavirus / Covid-19 (No tin foil hat silliness please)

massi83

Full Member
Joined
Feb 2, 2009
Messages
2,596
Similar to UK and Spain would still have it amongst the worst. I’m just curious if the Belgian medics protesting are justifiably pissed off or if the Belgian government might be getting unfairly criticised when they’ve just been unlucky.
That's true. Numbers are going down nicely, but I have no idea if they made big errors in March. It is a really dense country that probably has an effect.
 

Pogue Mahone

The caf's Camus.
Joined
Feb 22, 2006
Messages
134,268
Location
"like a man in silk pyjamas shooting pigeons
That's true. Numbers are going down nicely, but I have no idea if they made big errors in March. It is a really dense country that probably has an effect.
Civil unrest really worries me. Especially if this drags on and on. Irrational anger towards governments that didn’t make the same mistakes as everyone else - when faced with unprecedented challenges - would be a big concern. However things pan out in Sweden, if they’re keeping the citizens onside then they’re doing something right anyway.
 

massi83

Full Member
Joined
Feb 2, 2009
Messages
2,596
Civil unrest really worries me. Especially if this drags on and on. Irrational anger towards governments that didn’t make the same mistakes as everyone else - when faced with unprecedented challenges - would be a big concern. However things pan out in Sweden, if they’re keeping the citizens onside then they’re doing something right anyway.
Definitely. But I don't have much to add to that (important) conversation. And try to worry about one thing at a time :)
 

TMDaines

Fun sponge.
Joined
Sep 1, 2014
Messages
14,018
Got a link? I'd like to read the articles.

It's a false dichotomy, as those are not the only alternatives.
My point is that the English people (in my lifetime at least) have been reluctant to embrace leadership coming from anywhere but Westminster. All manner of local and European elections get poor turnouts, and no one has really run a successful national political campaign on devolving serious power to the regions. Any attempt to truly regionalise the NHS would be seen as dismantling it, even though its being a national health service is precisely what hamstrings it from best meeting the needs of its local populaces. Local governments have pathetically little power. We have supposed health and social care devolution in Greater Manchester, but all we really do is try to meet the top-down strategy imposed by national leaders, whilst tinkering round the edges.

Andy Burnham’s opinion piece is here: https://www.theguardian.com/comment...ether-it-doesnt-look-like-it-from-the-regions

This morning’s lead is here: https://www.theguardian.com/busines...own-spreads-as-poll-slump-hits-prime-minister
 

Brwned

Have you ever been in love before?
Joined
Apr 18, 2008
Messages
50,850
Has anyone seen any evidence of this economic upside?
The only real data we have so far is for Q1 2020, in which Sweden's economy declined by 0.3% while the UK's fell by 2%, the USA's by 4.8%, the EU's by 3.5% and Norway's by 1.9%. So things looked better before the crisis really kicked into gear.

Beyond that you've just got estimates. Sweden's central bank (the Riksbank) estimates GDP will decline by 7-10% this year, while the European Commission estimates that Sweden's GDP will fall by 6% (as will Denmark's and Finland's, compared to e.g. 8% in the UK). But a lot of economists have pointed out that they're not epidemiologists, so their projections of how the economy will recover rest on very uncertain assumptions about the virus's progression.

I think one major difference is that Sweden's government haven't been paying a large chunk of businesses' employees during the crisis, in the way Denmark or the UK have, nor have they had a huge rise in unemployment claims in the way the US has. So as I understand it they won't come out of the worst of the crisis with a boatload of new debt, which you'd think would help them in trying to stimulate the economy afterwards. There's a limit to what they can do though, as they're an export-oriented economy and that's what hurt them most during the global financial crisis.
 
Joined
Jul 31, 2015
Messages
23,019
Location
Somewhere out there
I think one major difference is that Sweden's government haven't been paying a large chunk of businesses' employees during the crisis, in the way Denmark or the UK have, nor have they had a huge rise in unemployment claims in the way the US has.
hmmmm....

https://www.aftonbladet.se/nyheter/a/K34eL5/grafik-sa-okar-arbetslosheten

Been a huge amount furloughed also, 350,000 at the end of April.

Sweden didn’t choose this strategy to save the economy; the health ministry don’t really care about that. They chose this path because they believed the virus was too widespread across Europe to stop, and that they could flatten the curve without a lockdown.

What will ultimately prove Sweden’s health ministry right or wrong is how the likes of Denmark do over the next 12 months or so, and also how regions like Skåne (Malmö) develop.
 
Last edited:

VP89

Pogba's biggest fan
Joined
Dec 6, 2015
Messages
31,950
Does anyone know if the Covid antibody test by Vazyme (I think it's under the company "Stratech") is accurate?

There are quite a few tests out there. I think there's an opportunity to have a Vazyme test, but no real point if it's not approved.
 

Zexstream

Anti-anti-racist
Joined
Aug 31, 2011
Messages
2,095
Almost a third of negative coronavirus tests could be WRONG: Expert warns it’s ‘dangerous’ to rely on test results as part of lockdown exit strategy
  • False negatives would mainly be the fault of incorrect swabbing for tests
  • Scientists say there are bound to be mistakes in around 10-30% of swabs
  • They say it is 'dangerous' to rely on test results to make decisions on the crisis
  • It follows warnings to ministers that one in four cases are being missed because the symptom list is not broad enough, only noting a cough and high temperature
https://www.dailymail.co.uk/news/ar...tml?ito=push-notification&ci=15428&si=6552952
 

groovyalbert

it's a mute point
Joined
Feb 14, 2013
Messages
9,749
Location
London
Almost a third of negative coronavirus tests could be WRONG: Expert warns it’s ‘dangerous’ to rely on test results as part of lockdown exit strategy
  • False negatives would mainly be the fault of incorrect swabbing for tests
  • Scientists say there are bound to be mistakes in around 10-30% of swabs
  • They say it is 'dangerous' to rely on test results to make decisions on the crisis
  • It follows warnings to ministers that one in four cases are being missed because the symptom list is not broad enough, only noting a cough and high temperature
https://www.dailymail.co.uk/news/ar...tml?ito=push-notification&ci=15428&si=6552952
Worth noting that a false negative doesn't equate to a positive. It's an invalid test.

And it's impossible to know when you're encountering a false negative test. So all this really serves to show is how hopeless it is to base any policy on the statistical evidence available at the moment. I don't know what else you can go on at this moment in time though; you have to put faith in the results at hand and allow for a wider variation in predictions than is ideally preferable.

It's probably worth noting the number of deaths that have occurred which are marked as suspected Covid without any testing being done. Now, I don't know if these deaths are added to daily fatality figures, but I imagine the data people are working off is hardly the most reliable, regardless of the findings/analysis they're performing.
 

Pogue Mahone

The caf's Camus.
Joined
Feb 22, 2006
Messages
134,268
Location
"like a man in silk pyjamas shooting pigeons
Does anyone know if the Covid antibody test by Vazyme (I think it's under the company "Stratech") is accurate?

There are quite a few tests out there. I think there's an opportunity to have a Vazyme test, but no real point if it's not approved.
I think every test developed so far that uses blood from the tip of your finger is pretty much useless. Their specificity doesn’t seem to get any higher than 95%. This means that if you test 100 people, you will get 5 false positive results (i.e. the test says they have antibodies but the test is wrong). Bearing in mind the true prevalence is about 5%, you’re only going to get 5 out of that 100 with a correct positive result.

So that’s a 50:50 chance for each individual with a positive test to trust that result. You might as well toss a coin!

The only tests I’ve read about with a 99%+ specificity require a venous sample i.e. can’t be done at home.
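For anyone who wants to check the arithmetic above, here's a quick Python sketch. The 95% specificity, ~100% sensitivity and 5% prevalence figures are just the illustrative numbers from the post, not any particular test's published performance:

```python
# Positive predictive value (PPV): the chance a positive result is genuine.
# Illustrative numbers only: 95% specificity, sensitivity assumed ~100%,
# true prevalence of past infection ~5%.
def ppv(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity              # correct positives per person tested
    false_pos = (1 - prevalence) * (1 - specificity) # false alarms per person tested
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.05, 1.0, 0.95), 2))  # ≈ 0.51 – barely better than a coin toss
print(round(ppv(0.05, 1.0, 0.99), 2))  # ≈ 0.84 – why 99%+ specificity matters
```

Which is exactly the coin-toss point: at 5% prevalence, bumping specificity from 95% to 99% takes a positive result from near-useless to fairly trustworthy.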
 

VP89

Pogba's biggest fan
Joined
Dec 6, 2015
Messages
31,950
I think every test developed so far that uses blood from the tip of your finger is pretty much useless. Their specificity doesn’t seem to get any higher than 95%. This means that if you test 100 people, you will get 5 false positive results (i.e. the test says they have antibodies but the test is wrong). Bearing in mind the true prevalence is about 5%, you’re only going to get 5 out of that 100 with a correct positive result.

So that’s a 50:50 chance for each individual with a positive test to trust that result. You might as well toss a coin!

The only tests I’ve read about with a 99%+ specificity require a venous sample i.e. can’t be done at home.
Thanks Pogue, appreciate it. We've all had these tests done now at a relative's. It seems weird though, because my wife tested positive a month ago as she worked in ICU, and yet everyone from the same household, including me, was negative.

And yeah, it was a finger-prick test. Pretty awkward now because my parents (also tested negative) are finally spending time in the garden with their daughter (tested positive for antigen)... and they are under the false premise that it's OK :(
 

NYAS

Full Member
Joined
Dec 25, 2012
Messages
4,324
Nothing quite encapsulates how global the Caf is better than when @Revan in California and @Wibble in Australia work the night shift and keep this thread going through the wee hours. Love it.
 

Pogue Mahone

The caf's Camus.
Joined
Feb 22, 2006
Messages
134,268
Location
"like a man in silk pyjamas shooting pigeons
Thanks Pogue, appreciate it. We've all had these tests done now at a relative's. It seems weird though, because my wife tested positive a month ago as she worked in ICU, and yet everyone from the same household, including me, was negative.

And yeah, it was a finger-prick test. Pretty awkward now because my parents (also tested negative) are finally spending time in the garden with their daughter (tested positive for antigen)... and they are under the false premise that it's OK :(
Potentially false. A positive test still gives a MUCH higher chance of antibodies than a negative one (or no test at all). A 50:50 chance of having been previously infected is a hell of a lot better than the <1 in 20 chance had she not tested positive. There are so many uncertainties in all of this. All that any of us can do is take sensible risks, and that might be a sensible risk your parents think is worth taking. Even less of a risk when they’re just hanging out in the garden. I reckon mine would do the same.
 

mikey_d

Full Member
Joined
May 7, 2008
Messages
1,500
Location
wales
Worth noting that a false negative doesn't equate to a positive. It's an invalid test.

And it's impossible to know when you're encountering a false negative test. So all this really serves to show is how hopeless it is to base any policy on the statistical evidence available at the moment. I don't know what else you can go on at this moment in time though; you have to put faith in the results at hand and allow for a wider variation in predictions than is ideally preferable.

It's probably worth noting the number of deaths that have occurred which are marked as suspected Covid without any testing being done. Now, I don't know if these deaths are added to daily fatality figures, but I imagine the data people are working off is hardly the most reliable, regardless of the findings/analysis they're performing.
I know what you're trying to say re false negatives not equating to positives: that the tests administered incorrectly are not necessarily positive, and that they are in fact invalid tests.
However, your terminology isn't correct.

By definition a false negative is where someone positive tests negative.

All tests have an inherent false negative and positive rate. In this kind of test one of the contributors to the false negative rate is invalidated tests due to swabs being taken incorrectly.

However, what you're describing is the test's failure rate. We know there is a high false negative rate from the swabs for multiple reasons, but this is confounded by the high failure rate of the test. The difficulty is that there is no mechanism to identify failed tests.
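To pin the terminology down, here's a minimal sketch (with made-up counts, purely for illustration) of how these rates fall out of a confusion matrix:

```python
# Hypothetical counts for a swab test evaluated against true infection status.
TP, FN = 70, 30   # infected people who test positive / negative
TN, FP = 95, 5    # uninfected people who test negative / positive

false_negative_rate = FN / (TP + FN)  # share of infected people the test misses
sensitivity = TP / (TP + FN)          # = 1 - false_negative_rate
specificity = TN / (TN + FP)          # share of uninfected people correctly cleared

print(false_negative_rate)  # 0.3 – i.e. the ~30% figure quoted in the article
```

The failure rate (swabs that produce no valid result at all) is a separate number, and the problem described above is that failed swabs can't be separated out, so they get silently folded into the false negatives.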
 

sammsky1

Pochettino's #1 fan
Joined
Feb 10, 2008
Messages
32,841
Location
London
If in the end we fail to get a vaccine, I guess life will go on, like it has for many other diseases (Spanish flu, smallpox etc). Tens of millions will die over the years (maybe even in the region of hundreds of millions), with the virus coming and going like many other seasonal viruses. Those that get it and survive become immune (for at least some time), in turn limiting the spread of the virus, though next year would be even worse than this one.
I think most people on the planet are trapped in some sort of cognitive denial right now. Even if we get a vaccine, I doubt the masses will get access to it before next winter, when presumably Covid-19 will regain its chops and spread widely again. Though one can only hope that with stricter and more conscientious social distancing, the impact might not be as bad as round 1?
 
Last edited:

sammsky1

Pochettino's #1 fan
Joined
Feb 10, 2008
Messages
32,841
Location
London
Neil Ferguson's Imperial model could be the most devastating software mistake of all time
David Richards and Konstantin Boudnik 16 May 2020 https://www.telegraph.co.uk/technol...ial-model-could-devastating-software-mistake/
In the history of expensive software mistakes, Mariner 1 was probably the most notorious. The unmanned spacecraft was destroyed seconds after launch from Cape Canaveral in 1962 when it veered dangerously off-course due to a line of dodgy code.
But nobody died and the only hits were to Nasa’s budget and pride. Imperial College’s modelling of non-pharmaceutical interventions for Covid-19 which helped persuade the UK and other countries to bring in draconian lockdowns will supersede the failed Venus space probe and could go down in history as the most devastating software mistake of all time, in terms of economic costs and lives lost.
Since publication of Imperial’s microsimulation model, those of us with a professional and personal interest in software development have studied the code on which policymakers based their fateful decision to mothball our multi-trillion pound economy and plunge millions of people into poverty and hardship. And we were profoundly disturbed at what we discovered. The model appears to be totally unreliable and you wouldn’t stake your life on it.
First though, a few words on our credentials. I am David Richards, founder and chief executive of WANdisco, a global leader in Big Data software that is jointly headquartered in Silicon Valley and Sheffield. My co-author is Dr Konstantin ‘Cos’ Boudnik, vice-president of architecture at WANdisco, author of 17 US patents in distributed computing and a veteran developer of the Apache Hadoop framework that allows computers to solve problems using vast amounts of data.
Imperial’s model appears to be based on a programming language called Fortran, which was old news 20 years ago and, guess what, was the code used for Mariner 1. This outdated language contains inherent problems with its grammar and the way it assigns values, which can give way to multiple design flaws and numerical inaccuracies. One file alone in the Imperial model contained 15,000 lines of code.
Try unravelling that tangled, buggy mess, which looks more like a bowl of angel hair pasta than a finely tuned piece of programming. Industry best practice would have 500 separate files instead. In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.
The approach ignores widely accepted computer science principles known as "separation of concerns", which date back to the early 70s and are essential to the design and architecture of successful software systems. The principles guard against what developers call CACE: Changing Anything Changes Everything.
Without this separation, it is impossible to carry out rigorous testing of individual parts to ensure full working order of the whole. Testing allows for guarantees. It is what you do on a conveyor belt in a car factory. Each and every component is tested for integrity in order to pass strict quality controls.
Only then is the car deemed safe to go on the road. As a result, Imperial’s model is vulnerable to producing wildly different and conflicting outputs based on the same initial set of parameters. Run it on different computers and you would likely get different results. In other words, it is non-deterministic.
As such, it is fundamentally unreliable. It screams the question as to why our Government did not get a second opinion before swallowing Imperial's prescription.
Ultimately, this is a computer science problem and where are the computer scientists in the room? Our leaders did not have the grounding in computer science to challenge the ideas and so were susceptible to the academics. I suspect the Government saw what was happening in Italy with its overwhelmed hospitals and panicked.
It chose a blunt instrument instead of a scalpel and now there is going to be a huge strain on society. Defenders of the Imperial model argue that because the problem - a global pandemic - is dynamic, then the solution should share the same stochastic, non-deterministic quality.
We disagree. Models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters. Otherwise, there is simply no way of knowing whether they will be reliable.
Indeed, many global industries successfully use deterministic models that factor in randomness. No surgeon would put a pacemaker into a cardiac patient knowing it was based on an arguably unpredictable approach for fear of jeopardising the Hippocratic oath. Why on earth would the Government place its trust in the same when the entire wellbeing of our nation is at stake?

David Richards, founder and chief executive of WANdisco and Dr Konstantin Boudnik is the company's vice-president of architecture



Coding that led to lockdown was 'totally unreliable' and a 'buggy mess', say experts
By Hannah Boland and Ellie Zolfagharifard 16 May 2020 https://www.telegraph.co.uk/technol...wn-totally-unreliable-buggy-mess-say-experts/
The Covid-19 modelling that sent Britain into lockdown, shutting the economy and leaving millions unemployed, has been slammed by a series of experts.
Professor Neil Ferguson's computer coding was derided as “totally unreliable” by leading figures, who warned it was “something you wouldn’t stake your life on".
The model, credited with forcing the Government to make a U-turn and introduce a nationwide lockdown, is a “buggy mess that looks more like a bowl of angel hair pasta than a finely tuned piece of programming”, says David Richards, co-founder of British data technology company WANdisco.
“In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.”
The comments are likely to reignite a row over whether the UK was right to send the public into lockdown, with conflicting scientific models having suggested people may have already acquired substantial herd immunity and that Covid-19 may have hit Britain earlier than first thought. Scientists have also been split on what the fatality rate of Covid-19 is, which has resulted in vastly different models.
Up until now, though, significant weight has been attached to Imperial's model, which placed the fatality rate higher than others and predicted that 510,000 people in the UK could die without a lockdown.
It was said to have prompted a dramatic change in policy from the Government, causing businesses, schools and restaurants to be shuttered immediately in March. The Bank of England has predicted that the economy could take a year to return to normal, after facing its worst recession for more than three centuries.
The Imperial model works by using code to simulate transport links, population size, social networks and healthcare provisions to predict how coronavirus would spread. However, questions have since emerged over whether the model is accurate, after researchers released the code behind it, which in its original form was “thousands of lines” developed over more than 13 years.
In its initial form, developers claimed the code had been unreadable, with some parts looking “like they were machine translated from Fortran”, an old coding language, according to John Carmack, an American developer, who helped clean up the code before it was published online. Yet, the problems appear to go much deeper than messy coding.
Many have claimed that it is almost impossible to reproduce the same results from the same data, using the same code. Scientists from the University of Edinburgh reported such an issue, saying they got different results when they used different machines, and even in some cases, when they used the same machines.
“There appears to be a bug in either the creation or re-use of the network file. If we attempt two completely identical runs, only varying in that the second should use the network file produced by the first, the results are quite different,” the Edinburgh researchers wrote on the Github file.
After a discussion with one of the Github developers, a fix was later provided. This is said to be one of a number of bugs discovered within the system. The Github developers explained this by saying that the model is “stochastic”, and that “multiple runs with different seeds should be undertaken to see average behaviour”.
However, it has prompted questions from specialists, who say “models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters...otherwise, there is simply no way of knowing whether they will be reliable.”
It comes amid a wider debate over whether the Government should have relied more heavily on numerous models before making policy decisions.
Writing for telegraph.co.uk, Sir Nigel Shadbolt, Principal at Jesus College, said that “having a diverse variety of models, particularly those that enable policymakers to explore predictions under different assumptions, and with different interventions, is incredibly powerful”.
Like the Imperial code, a rival model by Professor Sunetra Gupta at Oxford University works on a so-called "SIR approach" in which the population is divided into those that are susceptible, infected and recovered. However, while Gupta made the assumption that 0.1pc of people infected with coronavirus would die, Ferguson placed that figure at 0.9pc.
That led to a dramatic reversal in government policy from attempting to build “herd immunity” to a full-on lockdown. Experts remain baffled as to why the government appeared to dismiss other models.
“We’d be up in arms if weather forecasting was based on a single set of results from a single model and missed taking that umbrella when it rained,” says Michael Bonsall, Professor of Mathematical Biology at Oxford University.
Concerns, in particular, over Ferguson’s model have been raised, with Konstantin Boudnik, vice-president of architecture at WANdisco, saying his track record in modelling doesn’t inspire confidence.
In the early 2000s, Ferguson’s models incorrectly predicted up to 136,000 deaths from mad cow disease, 200 million from bird flu and 65,000 from swine flu.
“The facts from the early 2000s are just yet another confirmation that their modelling approach was flawed to the core,” says Dr Boudnik. “We don't know for sure if the same model/code was used, but we clearly see their methodology wasn't rigorous then and surely hasn't improved now.”
A spokesperson for the Imperial College COVID19 Response Team said: “The UK Government has never relied on a single disease model to inform decision-making. As has been repeatedly stated, decision-making around lockdown was based on a consensus view of the scientific evidence, including several modelling studies by different academic groups.
“Multiple groups using different models concluded that the pandemic would overwhelm the NHS and cause unacceptably high mortality in the absence of extreme social distancing measures. Within the Imperial research team we use several models of differing levels of complexity, all of which produce consistent results. We are working with a number of legitimate academic groups and technology companies to develop, test and further document the simulation code referred to. However, we reject the partisan reviews of a few clearly ideologically motivated commentators.
“Epidemiology is not a branch of computer science and the conclusions around lockdown rely not on any mathematical model but on the scientific consensus that COVID-19 is a highly transmissible virus with an infection fatality ratio exceeding 0.5pc in the UK.”
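On the determinism row specifically: the GitHub developers' defence (run a stochastic model with multiple seeds and report the average) is standard practice, and it's distinct from the reproducibility bug the Edinburgh researchers hit. A toy sketch (nothing to do with the actual Imperial code) of the difference:

```python
import random

def toy_model_run(rng):
    # Stand-in for one stochastic simulation run: a "final size" built from
    # 1,000 random draws. Any real epidemic model is vastly more complex.
    return sum(rng.random() for _ in range(1000))

# A properly seeded stochastic model IS deterministic given its seed:
# the same seed must reproduce the identical result, every time, on any machine.
a = toy_model_run(random.Random(42))
b = toy_model_run(random.Random(42))
assert a == b

# What gets reported is the average behaviour across many seeds.
runs = [toy_model_run(random.Random(seed)) for seed in range(20)]
print(sum(runs) / len(runs))  # ≈ 500 for this toy example
```

So "stochastic" and "reproducible" aren't in tension: randomness lives in the seed, and a run that gives different answers for the same seed (as the Edinburgh team reported) is a bug, not a feature of stochastic modelling.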
 
Last edited:

buchansleftleg

Full Member
Joined
Aug 27, 2014
Messages
3,748
Location
Dublin, formerly Manchester

On the subject of Belgium, I haven’t actually been following what’s happened there (other than noticing their grim stats) but the citizens seem pretty pissed off with their government. Anyone know what they did wrong/differently to everyone else?
That's an amazingly powerful protest. Would love UK staff to do that to Boris at his next PR briefing.
 

Dumbstar

We got another woman hater here.
Joined
Jul 18, 2002
Messages
21,286
Location
Viva Karius!
Supports
Liverpool
90 hospital deaths in England today. 111 in the UK in total. Taking into account weekend reporting.
 

Hound Dog

Full Member
Joined
Mar 10, 2011
Messages
3,214
Location
Belgrade, Serbia
Supports
Whoever I bet on

Here’s a pretty good summary on what the next 18-24 months will look like without a vaccine. Osterholm knows his shit. Most likely scenarios are not great but not awful either.
Surely they meant 18-24 from the beginning? If so, we are already nearly a third/quarter in...
 

arnie_ni

Full Member
Joined
Apr 27, 2014
Messages
15,252
Neil Ferguson's Imperial model could be the most devastating software mistake of all time
David Richards and Konstantin Boudnik 16 May 2020 https://www.telegraph.co.uk/technol...ial-model-could-devastating-software-mistake/
In the history of expensive software mistakes, Mariner 1 was probably the most notorious. The unmanned spacecraft was destroyed seconds after launch from Cape Canaveral in 1962 when it veered dangerously off-course due to a line of dodgy code.
But nobody died and the only hits were to Nasa’s budget and pride. Imperial College’s modelling of non-pharmaceutical interventions for Covid-19 which helped persuade the UK and other countries to bring in draconian lockdowns will supersede the failed Venus space probe and could go down in history as the most devastating software mistake of all time, in terms of economic costs and lives lost.
Since publication of Imperial’s microsimulation model, those of us with a professional and personal interest in software development have studied the code on which policymakers based their fateful decision to mothball our multi-trillion pound economy and plunge millions of people into poverty and hardship. And we were profoundly disturbed at what we discovered. The model appears to be totally unreliable and you wouldn’t stake your life on it.
First though, a few words on our credentials. I am David Richards, founder and chief executive of WANdisco, a global leader in Big Data software that is jointly headquartered in Silicon Valley and Sheffield. My co-author is Dr Konstantin ‘Cos’ Boudnik, vice-president of architecture at WANdisco, author of 17 US patents in distributed computing and a veteran developer of the Apache Hadoop framework that allows computers to solve problems using vast amounts of data.
Imperial’s model appears to be based on a programming language called Fortran, which was old news 20 years ago and, guess what, was the code used for Mariner 1. This outdated language contains inherent problems with its grammar and the way it assigns values, which can give way to multiple design flaws and numerical inaccuracies. One file alone in the Imperial model contained 15,000 lines of code.
Try unravelling that tangled, buggy mess, which looks more like a bowl of angel hair pasta than a finely tuned piece of programming. Industry best practice would have 500 separate files instead. In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.
The approach ignores widely accepted computer science principles known as "separation of concerns", which date back to the early 70s and are essential to the design and architecture of successful software systems. The principles guard against what developers call CACE: Changing Anything Changes Everything.
Without this separation, it is impossible to carry out rigorous testing of individual parts to ensure full working order of the whole. Testing allows for guarantees. It is what you do on a conveyor belt in a car factory. Each and every component is tested for integrity in order to pass strict quality controls.
Does it matter at this stage? Tens of thousands of deaths with a lockdown, tens of thousands more without.

Are they saying that would have been acceptable?
 

One Night Only

Prison Bitch #24604
Joined
Oct 16, 2009
Messages
30,878
Location
Westworld
Anyone watching bbc news right now?
Yeah I was, bloke was jumpy as hell, twitching constantly, weird as hell.

However, I can see his point: people who don't feel vulnerable go about life as normal.

Those who do, stay locked down.

Anyone who catches the virus and dies would pretty much have made their own choice, by not locking down when they felt vulnerable.

He also mentioned how the numbers aren't exactly as bad as they're made out to be compared to previous viruses and stuff. If he's right in what he's saying, I can fully understand his viewpoint.

However, I don't know if he's telling the truth or just talking out his arse for his own views.

Allow people who feel vulnerable to effectively self-furlough through their company.
 

Pexbo

Winner of the 'I'm not reading that' medal.
Joined
Jun 2, 2009
Messages
68,807
Location
Brizzle
Supports
Big Days
Neil Ferguson's Imperial model could be the most devastating software mistake of all time
David Richards and Konstantin Boudnik 16 May 2020 https://www.telegraph.co.uk/technol...ial-model-could-devastating-software-mistake/
In the history of expensive software mistakes, Mariner 1 was probably the most notorious. The unmanned spacecraft was destroyed seconds after launch from Cape Canaveral in 1962 when it veered dangerously off-course due to a line of dodgy code.
But nobody died and the only hits were to Nasa’s budget and pride. Imperial College’s modelling of non-pharmaceutical interventions for Covid-19 which helped persuade the UK and other countries to bring in draconian lockdowns will supersede the failed Venus space probe and could go down in history as the most devastating software mistake of all time, in terms of economic costs and lives lost.
Since publication of Imperial’s microsimulation model, those of us with a professional and personal interest in software development have studied the code on which policymakers based their fateful decision to mothball our multi-trillion pound economy and plunge millions of people into poverty and hardship. And we were profoundly disturbed at what we discovered. The model appears to be totally unreliable and you wouldn’t stake your life on it.
First though, a few words on our credentials. I am David Richards, founder and chief executive of WANdisco, a global leader in Big Data software that is jointly headquartered in Silicon Valley and Sheffield. My co-author is Dr Konstantin ‘Cos’ Boudnik, vice-president of architecture at WANdisco, author of 17 US patents in distributed computing and a veteran developer of the Apache Hadoop framework that allows computers to solve problems using vast amounts of data.
Imperial’s model appears to be based on a programming language called Fortran, which was old news 20 years ago and, guess what, was the code used for Mariner 1. This outdated language contains inherent problems with its grammar and the way it assigns values, which can give way to multiple design flaws and numerical inaccuracies. One file alone in the Imperial model contained 15,000 lines of code.
Try unravelling that tangled, buggy mess, which looks more like a bowl of angel hair pasta than a finely tuned piece of programming. Industry best practice would have 500 separate files instead. In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.
The approach ignores widely accepted computer science principles known as "separation of concerns", which date back to the early 70s and are essential to the design and architecture of successful software systems. The principles guard against what developers call CACE: Changing Anything Changes Everything.
Without this separation, it is impossible to carry out rigorous testing of individual parts to ensure full working order of the whole. Testing allows for guarantees. It is what you do on a conveyor belt in a car factory. Each and every component is tested for integrity in order to pass strict quality controls.
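The principle the authors invoke can be shown in miniature. A hypothetical sketch (the function name and numbers are illustrative, not Imperial's actual code): factor one model step into a small pure function so it can be tested in isolation before the whole simulation is trusted.

```python
# Illustrative sketch only: a single model step factored out as a pure
# function, so it can be unit-tested in isolation instead of being
# buried inside one 15,000-line file.
def new_infections(susceptible, infected, beta, population):
    """Expected new infections in one time step of a simple model."""
    return beta * susceptible * infected / population

# Each component passes its own quality check before the whole "car"
# is deemed road-worthy:
assert new_infections(1000, 10, 0.5, 1010) > 0
assert new_infections(0, 10, 0.5, 1010) == 0  # no susceptibles, no spread
```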
Only then is the car deemed safe to go on the road. As a result, Imperial’s model is vulnerable to producing wildly different and conflicting outputs based on the same initial set of parameters. Run it on different computers and you would likely get different results. In other words, it is non-deterministic.
As such, it is fundamentally unreliable. It screams the question as to why our Government did not get a second opinion before swallowing Imperial's prescription.
Ultimately, this is a computer science problem and where are the computer scientists in the room? Our leaders did not have the grounding in computer science to challenge the ideas and so were susceptible to the academics. I suspect the Government saw what was happening in Italy with its overwhelmed hospitals and panicked.
It chose a blunt instrument instead of a scalpel and now there is going to be a huge strain on society. Defenders of the Imperial model argue that because the problem - a global pandemic - is dynamic, then the solution should share the same stochastic, non-deterministic quality.
We disagree. Models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters. Otherwise, there is simply no way of knowing whether they will be reliable.
Indeed, many global industries successfully use deterministic models that factor in randomness. No surgeon would put a pacemaker into a cardiac patient knowing it was based on an arguably unpredictable approach for fear of jeopardising the Hippocratic oath. Why on earth would the Government place its trust in the same when the entire wellbeing of our nation is at stake?
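The reproducibility test the op-ed demands ("the same results given the same initial set of parameters") is, in practice, a matter of seeding the random number generator explicitly. A toy sketch of that idea, with a made-up model that is not Imperial's:

```python
import random

def toy_epidemic(r0, days, seed):
    """Toy stochastic infection model (illustrative only): each case
    spawns a random number of new cases per day, averaging r0."""
    rng = random.Random(seed)  # explicit seed: the source of determinism
    cases = 1
    for _ in range(days):
        cases = sum(1 for _ in range(cases * 3) if rng.random() < r0 / 3)
    return cases

# Randomness is retained, yet the same parameters and seed reproduce
# the same result on any machine:
assert toy_epidemic(1.5, 10, seed=42) == toy_epidemic(1.5, 10, seed=42)
# Different seeds then reveal the run-to-run spread of the model:
spread = [toy_epidemic(1.5, 10, seed=s) for s in range(20)]
```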

David Richards, founder and chief executive of WANdisco and Dr Konstantin Boudnik is the company's vice-president of architecture



Coding that led to lockdown was 'totally unreliable' and a 'buggy mess', say experts
By Hannah Boland and Ellie Zolfagharifard 16 May 2020 https://www.telegraph.co.uk/technol...wn-totally-unreliable-buggy-mess-say-experts/
The Covid-19 modelling that sent Britain into lockdown, shutting the economy and leaving millions unemployed, has been slammed by a series of experts.
Professor Neil Ferguson's computer coding was derided as “totally unreliable” by leading figures, who warned it was “something you wouldn’t stake your life on”.
The model, credited with forcing the Government to make a U-turn and introduce a nationwide lockdown, is a “buggy mess that looks more like a bowl of angel hair pasta than a finely tuned piece of programming”, says David Richards, co-founder of British data technology company WANdisco.
“In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.”
The comments are likely to reignite a row over whether the UK was right to send the public into lockdown, with conflicting scientific models having suggested people may have already acquired substantial herd immunity and that Covid-19 may have hit Britain earlier than first thought. Scientists have also been split on what the fatality rate of Covid-19 is, which has resulted in vastly different models.
Up until now, though, significant weight has been attached to Imperial's model, which placed the fatality rate higher than others and predicted that 510,000 people in the UK could die without a lockdown.
It was said to have prompted a dramatic change in policy from the Government, causing businesses, schools and restaurants to be shuttered immediately in March. The Bank of England has predicted that the economy could take a year to return to normal, after facing its worst recession for more than three centuries.
The Imperial model works by using code to simulate transport links, population size, social networks and healthcare provisions to predict how coronavirus would spread. However, questions have since emerged over whether the model is accurate, after researchers released the code behind it, which in its original form was “thousands of lines” developed over more than 13 years.
In its initial form, developers claimed the code had been unreadable, with some parts looking “like they were machine translated from Fortran”, an old coding language, according to John Carmack, an American developer, who helped clean up the code before it was published online. Yet, the problems appear to go much deeper than messy coding.
Many have claimed that it is almost impossible to reproduce the same results from the same data, using the same code. Scientists from the University of Edinburgh reported such an issue, saying they got different results when they used different machines, and even in some cases, when they used the same machines.
“There appears to be a bug in either the creation or re-use of the network file. If we attempt two completely identical runs, only varying in that the second should use the network file produced by the first, the results are quite different,” the Edinburgh researchers wrote on the Github file.
After a discussion with one of the Github developers, a fix was later provided. This is said to be one of a number of bugs discovered within the system. The Github developers explained this by saying that the model is “stochastic”, and that “multiple runs with different seeds should be undertaken to see average behaviour”.
However, it has prompted questions from specialists, who say “models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters...otherwise, there is simply no way of knowing whether they will be reliable.”
It comes amid a wider debate over whether the Government should have relied more heavily on numerous models before making policy decisions.
Writing for telegraph.co.uk, Sir Nigel Shadbolt, Principal at Jesus College, said that “having a diverse variety of models, particularly those that enable policymakers to explore predictions under different assumptions, and with different interventions, is incredibly powerful”.
Like the Imperial code, a rival model by Professor Sunetra Gupta at Oxford University works on a so-called "SIR approach" in which the population is divided into those that are susceptible, infected and recovered. However, while Gupta made the assumption that 0.1pc of people infected with coronavirus would die, Ferguson placed that figure at 0.9pc.
That led to a dramatic reversal in government policy from attempting to build “herd immunity” to a full-on lockdown. Experts remain baffled as to why the government appeared to dismiss other models.
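The gap between the two projections follows almost mechanically from that single assumption. A minimal deterministic SIR sketch makes the sensitivity visible (this is not the Imperial or Oxford code; the population, R0 and other parameters are illustrative):

```python
def sir_deaths(pop, r0, ifr, days=365, infectious_days=5, i0=100):
    """Deterministic SIR toy model (illustrative only): returns projected
    deaths for a given infection fatality ratio (IFR)."""
    gamma = 1.0 / infectious_days   # daily recovery rate
    beta = r0 * gamma               # daily transmission rate
    s, i, r = pop - i0, float(i0), 0.0
    for _ in range(days):
        new_inf = beta * s * i / pop
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return ifr * r                  # deaths scale linearly with the IFR

uk_pop = 66_000_000
low = sir_deaths(uk_pop, 2.4, 0.001)   # with a 0.1pc IFR assumption
high = sir_deaths(uk_pop, 2.4, 0.009)  # with a 0.9pc IFR assumption
# With everything else identical, the death projection differs ninefold.
```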
“We’d be up in arms if weather forecasting was based on a single set of results from a single model and missed taking that umbrella when it rained,” says Michael Bonsall, Professor of Mathematical Biology at Oxford University.
Concerns, in particular, over Ferguson’s model have been raised, with Konstantin Boudnik, vice-president of architecture at WANdisco, saying his track record in modelling doesn’t inspire confidence.
In the early 2000s, Ferguson’s models incorrectly predicted up to 136,000 deaths from mad cow disease, 200 million from bird flu and 65,000 from swine flu.
“The facts from the early 2000s are just yet another confirmation that their modeling approach was flawed to the core,” says Dr Boudnik. “We don't know for sure if the same model/code was used, but we clearly see their methodology wasn't rigorous then and surely hasn't improved now.”
A spokesperson for the Imperial College COVID-19 Response Team said: “The UK Government has never relied on a single disease model to inform decision-making. As has been repeatedly stated, decision-making around lockdown was based on a consensus view of the scientific evidence, including several modelling studies by different academic groups.
“Multiple groups using different models concluded that the pandemic would overwhelm the NHS and cause unacceptably high mortality in the absence of extreme social distancing measures. Within the Imperial research team we use several models of differing levels of complexity, all of which produce consistent results. We are working with a number of legitimate academic groups and technology companies to develop, test and further document the simulation code referred to. However, we reject the partisan reviews of a few clearly ideologically motivated commentators.
“Epidemiology is not a branch of computer science and the conclusions around lockdown rely not on any mathematical model but on the scientific consensus that COVID-19 is a highly transmissible virus with an infection fatality ratio exceeding 0.5pc in the UK.”
If they are going to only ever trust code which produces the same set of results given the same data, on the same machines (irrelevant), they’re going to have serious trouble adopting machine learning techniques in future.

I can very well believe it’s shit code and has a whole number of problems, but not accepting it because it’s stochastic is stupid as hell given that the real-world scenarios it is attempting to model are infinitely stochastic. The fact it is stochastic makes it more useful rather than less, as you can learn information from its predictable behaviour rather than its predictable outputs.
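That point, behaviour over outputs, can be sketched with a toy stochastic model (names and parameters hypothetical): no single run is "the answer", but the ensemble statistics across seeds are stable and informative.

```python
import random
import statistics

def noisy_growth(r0, days, seed):
    """Stand-in for a stochastic epidemic model (illustrative only):
    the daily growth rate jitters randomly around r0."""
    rng = random.Random(seed)
    cases = 1.0
    for _ in range(days):
        cases *= r0 * rng.uniform(0.9, 1.1)
    return cases

# Individual runs differ; the distribution over many seeds is the real
# output of the model:
runs = [noisy_growth(1.2, 30, seed=s) for s in range(500)]
centre, spread = statistics.mean(runs), statistics.stdev(runs)
# Questions are then asked of `centre` and `spread`, not of any single
# run, which is the "multiple runs with different seeds" point the
# GitHub developers were making.
```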
 

africanspur

Full Member
Joined
Sep 1, 2010
Messages
9,225
Supports
Tottenham Hotspur
Anyone watching bbc news right now?
The interview with Lord Sumption?

Bit of a train wreck of an interview imo. Some good points, but his argument is fundamentally undermined by a misapprehension of the facts.

Sadly, the presenter did not know enough to challenge him on some of the false things he said.
 

onemanarmy

Full Member
Joined
Jul 18, 2013
Messages
4,717
Location
Belgium

On the subject of Belgium, I haven’t actually been following what’s happened there (other than noticing their grim stats) but the citizens seem pretty pissed off with their government. Anyone know what they did wrong/differently to everyone else?
I'm not going into detail as to why, but you shouldn't repost tweets by Tom Van Grieken. He's the leader of an extremist far-right party with an obvious anti-government agenda. They are anti-Islam, anti-migrant, anti-culture... well, anti a lot of things.

Belgian people in my opinion are pretty ok with how the government has handled this crisis. Sure, there were feckups and hospital staff are underpaid, but in general it's been ok. This was a one-time event that seems to have been orchestrated by a union.
 

berbatrick

Renaissance Man
Scout
Joined
Oct 22, 2010
Messages
21,787
Two Coasts. One Virus. How New York Suffered Nearly 10 Times the Number of Deaths as California.
California’s governor and San Francisco’s mayor worked together to act early in confronting the COVID threat. For Andrew Cuomo and Bill de Blasio, it was a different story, and 27,000 New Yorkers have died so far.

devastating article. key points:
In San Francisco, Breed cleaned up her language in a text to California Gov. Gavin Newsom. But she was no less emphatic: The city needed to be closed. Newsom had once been San Francisco’s mayor, and he had appointed Breed to lead the city’s Fire Commission in 2010.

Newsom responded immediately, saying she should coordinate with the counties surrounding San Francisco as they too were moving toward a shutdown. Breed said she spoke to representatives of those counties on March 15 and their public health officials were prepared to make the announcement on their own. On March 16, with just under 40 cases of COVID-19 in San Francisco and no deaths, Breed issued the order banning all but essential movement and interaction.

“I really feel like we didn’t have a lot of good options,” Breed said.
...
Breed, it turns out, had sent de Blasio a copy of her detailed shelter-in-place order. She thought New York might benefit from it.

New York Gov. Andrew Cuomo, however, reacted to de Blasio’s idea for closing down New York City with derision. It was dangerous, he said, and served only to scare people. Language mattered, Cuomo said, and “shelter-in-place” sounded like it was a response to a nuclear apocalypse.

Moreover, Cuomo said, he alone had the power to order such a measure.

“No city in the state can quarantine itself without state approval,” Cuomo said of de Blasio’s call for a shelter-in-place order. “I have no plan whatsoever to quarantine any city.”
Cuomo’s conviction didn’t last. On March 22, he, too, shuttered his state. The action came six days after San Francisco had shut down, five days after de Blasio suggested doing similarly and three days after all of California had been closed by Newsom. By then, New York faced a raging epidemic, with the number of confirmed cases at 15,000 doubling every three or four days.


While New York’s formal pandemic response plan underscores the need for seamless communication between state and local officials, the state Health Department broke off routine sharing of information and strategy with its city counterpart in February, just as the size of the menace was becoming clearer, according to both a city official and a city employee. “Radio silence,” said the city official. To this day, the city employee said, the city can’t always get basic data from the state, such as counts of ventilators at hospitals or nursing home staff. “It’s like they have been ordered not to talk to us,” the person said.

For his part, de Blasio spent critical weeks spurning his own Health Department’s increasingly urgent belief that trying to contain the spread of the virus was a fool’s errand. The clear need, as early as late February, was to move to an all-out effort at not being overrun by the disease, which meant closing things down and restricting people’s movements. The frustration within the department grew so intense, according to one city official, plans were discussed to undertake a formal “resistance”; the department would do what needed to be done, the mayor’s directives be damned.
There's a lot more; both de Blasio and Cuomo come off badly, Cuomo probably worse.

https://www.propublica.org/article/...y-10-times-the-number-of-deaths-as-california
 

sammsky1

Pochettino's #1 fan
Joined
Feb 10, 2008
Messages
32,841
Location
London
 

Wolverine

Full Member
Joined
Jun 8, 2004
Messages
2,449
Location
UK
Mentioning 30mill vaccines by September if trials go well? Wtf? Would be nice.
Yup, there are a lot of ifs and buts there. Essentially, AstraZeneca would assist in manufacturing doses of the vaccine should it prove effective.

They've got more details here
http://www.ox.ac.uk/news/2020-04-23-oxford-covid-19-vaccine-begins-human-trial-stage

Ironically, if we hit a second peak or substantially increased community transmission, it'll be the only way to properly judge vaccine efficacy over a shorter timeframe for the clinical endpoints they want to assess.
 

djembatheking

Full Member
Joined
Feb 7, 2013
Messages
4,084
It seems a lot of places are easing restrictions massively these last few days. Are we over the worst of this?
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,891
Location
London
I get your point, but smallpox is one of those diseases vaccination has managed to completely eradicate.
Sure, after a few thousand years of causing havoc all over the world, destroying the Aztec civilization, and more than a hundred years after the vaccine for it was developed (the first ever vaccine, in fact, was for smallpox).