Subscribe to R bloggers feed R bloggers
R news and tutorials contributed by hundreds of R bloggers

Superspreading and the Gini Coefficient

Sun, 05/31/2020 - 00:00

[This article was first published on Theory meets practice..., and kindly contributed to R-bloggers]. (You can report issues about the content on this page here.) Want to share your content on R-bloggers? Click here if you have a blog, or here if you don't.

Abstract:

We look at superspreading in infectious disease transmission from a statistical point of view. We characterise heterogeneity in the offspring distribution by the Gini coefficient instead of the usual dispersion parameter of the negative binomial distribution. This allows us to consider more flexible offspring distributions.



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The markdown+Rknitr source code of this blog is available under a GNU General Public License (GPL v3) license from github.

Motivation

The recent Science report on Superspreading during the COVID-19 pandemic by Kai Kupferschmidt has made the dispersion parameter \(k\) of the negative binomial distribution a hot quantity1 in the discussions of how to determine effective interventions. This short blog post aims at understanding the math behind statements such as “Probably about 10% of cases lead to 80% of the spread” and at replicating them with computations in R.

Warning: This post reflects my own process of learning what superspreading is, more than it tries to make any statements of importance.

Superspreading

Lloyd-Smith et al. (2005) show that the 2002-2004 SARS-CoV-1 epidemic was driven by a small number of events where one case directly infected a large number of secondary cases – a so-called superspreading event. This means that for SARS-CoV-1 the distribution of how many secondary cases each primary case generates is heavy-tailed. More specifically, the effective reproduction number describes the mean number of secondary cases a primary case generates during the outbreak, i.e. it is the mean of the offspring distribution. In order to address dispersion around this mean, Lloyd-Smith et al. (2005) use the negative binomial distribution with mean \(R(t)\) and over-dispersion parameter \(k\) as a probability model for the offspring distribution. The number of offspring that case \(i\), which got infected at time \(t_i\), causes is given by \[
Y_{i} \sim \operatorname{NegBin}(R(t_i), k),
\] s.t. \(\operatorname{E}(Y_{i}) = R(t_i)\) and \(\operatorname{Var}(Y_{i}) = R(t_i) (1 + \frac{1}{k} R(t_i))\). This parametrisation makes it easy to see that the negative binomial model has an additional factor \(1 + \frac{1}{k} R(t_i)\) in the variance, which allows it to have excess variance (aka over-dispersion) compared to the Poisson distribution, which has \(\operatorname{Var}(Y_{i}) = R(t_i)\). If \(k\rightarrow \infty\) we obtain the Poisson distribution, and the closer \(k\) is to zero, the larger the variance, i.e. the heterogeneity, of the distribution. Note the deliberate use of the effective reproduction number \(R(t_i)\) instead of the basic reproduction number \(R_0\) (as done in Lloyd-Smith et al. (2005)) in the model. This is to highlight that one is likely to observe clusters in the context of interventions and depletion of susceptibles.
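As a quick sanity check of this parametrisation (a sketch; \(R=2.5\) and \(k=0.45\) are the values used later in this post), the mean and variance formulas can be verified by simulation:

```r
# Simulate a large number of offspring counts and compare the empirical
# mean and variance with E(Y) = R and Var(Y) = R * (1 + R/k).
set.seed(123)
R <- 2.5
k <- 0.45
y <- rnbinom(1e6, mu = R, size = k)
mean(y)               # close to 2.5
var(y)                # close to 2.5 * (1 + 2.5/0.45) = 16.4
```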

That the dispersion parameter \(k\) is gaining epidemiological fame is a little surprising, because it is a parameter of a specific parametric model – a model which might be inadequate for the observed data. A secondary objective of this post is thus to describe the heterogeneity of the offspring distribution using classical statistical concepts such as the Gini coefficient.

Negative binomial distributed number of secondary cases

Let’s assume \(k=0.45\) as done in Adam et al. (2020). This is a slightly higher estimate than the \(k=0.1\) estimate by Endo et al. (2020)2 quoted in the Science article. We want to derive statements like “the x% most active spreaders infected y% of all cases” as a function of \(k\). The PMF of the offspring distribution with mean 2.5 and dispersion 0.45 looks as follows:

Rt <- 2.5
k  <- 0.45
# Evaluate on a large enough grid, so E(Y_t) is determined accurately enough.
# We also include -1 in the grid to get a point (0,0) needed for the Lorenz curve.
df <- data.frame(x=-1:250) %>% mutate(pmf = dnbinom(x, mu=Rt, size=k))

So we observe that 43% of the cases never manage to infect a secondary case, whereas some cases manage to generate more than 10 new cases. The mean of the distribution is checked empirically to equal the specified \(R(t)\) of 2.5:

sum(df$x * df$pmf)
## [1] 2.5
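The 43% figure quoted above can be checked directly from the PMF (using the same \(R(t)=2.5\) and \(k=0.45\)):

```r
# Probability that a primary case generates zero secondary cases
dnbinom(0, mu = 2.5, size = 0.45)  # about 0.43
```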

Lloyd-Smith et al. (2005) define a superspreader as a primary case which generates more secondary cases than the 99% quantile of the Poisson distribution with mean \(R(t)\). We use this to compute the proportion of superspreaders in our distribution:

(superspreader_threshold <- qpois(0.99, lambda=Rt))
## [1] 7
(p_superspreader <- pnbinom(superspreader_threshold, mu=Rt, size=k, lower.tail=FALSE))
## [1] 0.09539277

So roughly 10% of the cases will generate more than 7 new cases. To get to statements such as “10% generate 80% of the cases” we also need to know how many cases those 10% generate out of the 2.5 average.

# Compute proportion of the overall expected number of new cases
df <- df %>% mutate(cdf = pnbinom(x, mu=Rt, size=k),
                    expected_cases = x*pmf,
                    prop_of_Rt = expected_cases/Rt,
                    cum_prop_of_Rt = cumsum(prop_of_Rt))

# Summarise
info <- df %>% filter(x > superspreader_threshold) %>%
  summarise(expected_cases = sum(expected_cases), prop_of_Rt = sum(prop_of_Rt))
info
##   expected_cases prop_of_Rt
## 1       1.192786  0.4771144

In other words, the superspreaders generate (on average) 1.19 of the 2.5 new cases of a generation, i.e. 48%.

These statements can also be made without formulating a superspreader threshold by graphing the cumulative share of the distribution of primary cases against the cumulative share of secondary cases these generate. This is exactly what the Lorenz curve does. However, for outbreak analysis it appears clearer to graph the cumulative distribution in decreasing order of the number of offspring, i.e. following Lloyd-Smith et al. (2005) we plot the cumulative share as \(P(Y\geq y)\) instead of \(P(Y \leq y)\). This is a variation of the Lorenz curve, but it allows statements such as “the x% of cases with the highest number of offspring generate y% of the secondary cases”.

# Add information for plotting the modified Lorenz curve
df <- df %>%
  mutate(cdf_decreasing = pnbinom(x-1, mu=Rt, size=k, lower.tail=FALSE)) %>%
  arrange(desc(x)) %>%
  mutate(cum_prop_of_Rt_decreasing = cumsum(prop_of_Rt))

# Plot the modified Lorenz curve as in Fig 1b of Lloyd-Smith et al. (2005)
ggplot(df, aes(x=cdf_decreasing, y=cum_prop_of_Rt_decreasing)) +
  geom_line() +
  coord_cartesian(xlim=c(0,1)) +
  xlab("Proportion of the infectious cases (cases with most secondary cases first)") +
  ylab("Proportion of the secondary cases") +
  scale_x_continuous(labels=scales::percent, breaks=seq(0,1,length=6)) +
  scale_y_continuous(labels=scales::percent, breaks=seq(0,1,length=6)) +
  geom_line(data=data.frame(x=seq(0,1,length=100)) %>% mutate(y=x),
            aes(x=x, y=y), lty=2, col="gray") +
  ggtitle(str_c("Scenario: R(t) = ", Rt, ", k = ", k))

The Gini coefficient for a discrete distribution with support on the non-negative integers can be computed with the standard formula \[
G = \frac{1}{2\mu} \sum_{y=0}^\infty \sum_{z=0}^\infty f(y) f(z) |y-z|,
\] where \(f(y)\), \(y=0,1,\ldots\) denotes the PMF of the distribution and \(\mu=\sum_{y=0}^\infty y f(y)\) is the mean of the distribution. In our case \(\mu=R(t)\). From this we get

# Gini index for a discrete probability distribution
gini_coeff <- function(df) {
  mu <- sum(df$x * df$pmf)
  sum <- 0
  for (i in 1:nrow(df)) {
    for (j in 1:nrow(df)) {
      sum <- sum + df$pmf[i] * df$pmf[j] * abs(df$x[i] - df$x[j])
    }
  }
  return(sum/(2*mu))
}

gini_coeff(df)
## [1] 0.704049
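As an aside, the double loop can be replaced by a vectorized computation with outer(); this is a sketch assuming, as above, a data frame with columns x and pmf:

```r
# Vectorized Gini coefficient: sum f(y) f(z) |y - z| over all pairs via outer()
gini_coeff_vec <- function(df) {
  mu <- sum(df$x * df$pmf)
  sum(outer(df$pmf, df$pmf) * abs(outer(df$x, df$x, "-"))) / (2 * mu)
}

# Small check on a two-point distribution
gini_coeff_vec(data.frame(x = c(1, 10), pmf = c(0.9, 0.1)))  # about 0.426
```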

A plot of the relationship between the dispersion parameter and the Gini index, given a fixed value of \(R(t)=2.5\), looks as follows

We see that the Gini index converges from above to the Gini index of the Poisson distribution with mean \(R(t)\). In our case this limit is

gini_coeff(data.frame(x=0:250) %>% mutate(pmf = dpois(x, lambda=Rt)))
## [1] 0.3475131

Red Marble Toy Example

Consider the toy example offspring distribution used by Christian Drosten in his Coronavirus Update podcast episode 44 on COVID-19 superspreading (in German). The hypothetical scenario described there translates to an offspring distribution where a primary case generates either 1 (with probability 9/10) or 10 (with probability 1/10) secondary cases:

# Offspring distribution
df_toyoffspring <- data.frame(x=c(1,10), pmf=c(9/10, 1/10))

# Hypothetical outbreak with 10000 cases from this offspring distribution
y_obs <- sample(df_toyoffspring$x, size=10000, replace=TRUE, prob=df_toyoffspring$pmf)

# Fit the negative binomial distribution to the observed offspring data.
# Note: it would be better to fit the PMF directly instead of to the
# hypothetical outbreak data.
(fit <- MASS::fitdistr(y_obs, "negative binomial"))
##       size          mu
##   1.69483494   1.90263640
##  (0.03724779) (0.02009563)

# Note: different parametrisation of the k parameter
(k.hat <- 1/fit$estimate["size"])
##     size
## 0.590028

In other words, when fitting a negative binomial distribution to these data (probably not a good idea) we get a dispersion parameter of 0.59.
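As a sketch of the alternative mentioned in the code comment, the dispersion parameter can also be moment-matched directly from the PMF, without simulating an outbreak, by solving \(\operatorname{Var}(Y) = \mu(1 + \mu/k)\) for \(k\):

```r
# Moment-match k from the toy PMF: k = mu^2 / (Var(Y) - mu)
x   <- c(1, 10)
pmf <- c(9/10, 1/10)
mu  <- sum(x * pmf)            # 1.9
v   <- sum(x^2 * pmf) - mu^2   # 7.29
(k_mm <- mu^2 / (v - mu))      # about 0.67
```

The moment-based value (about 0.67) differs somewhat from the maximum likelihood estimate of 0.59 obtained from the simulated outbreak data above.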

The Gini coefficient allows for a more sensible description for offspring distributions, which are clearly not negative-binomial.

gini_coeff(df_toyoffspring)
## [1] 0.4263158

Discussion

The effect of superspreaders underlines the stochastic nature of the dynamics of a person-to-person transmitted disease in a population. The dispersion parameter \(k\) is conditional on the assumption of a given parametric model for the offspring distribution (negative binomial). The Gini index is an alternative characterisation of heterogeneity. However, in both cases the parameters are to be interpreted together with the expectation of the distribution. Estimation of the dispersion parameter is orthogonal to the mean in the negative binomial model, and it’s straightforward to also get confidence intervals for it. This is less straightforward for the Gini index.

A heavy-tailed offspring distribution can make the disease easier to control by targeting intervention measures to restrict superspreading (Lloyd-Smith et al. 2005). The hope is that such interventions are “cheaper” than interventions which target the entire population of infectious contacts. However, the success of such a targeted strategy also depends on how large the contribution of superspreaders really is. Hence, some effort is needed to quantify the effect of superspreaders. Furthermore, the above treatment also underlines that heterogeneity can be a helpful feature to exploit when trying to control a disease. Another aspect of such heterogeneity, namely its influence on the threshold of herd immunity, has recently been investigated by my colleagues at Stockholm University (Britton, Ball, and Trapman 2020).

Literature

Adam, DC, P Wu, J Wong, E Lau, T Tsang, S Cauchemez, G Leung, and B Cowling. 2020. “Clustering and Superspreading Potential of Severe Acute Respiratory Syndrome Coronavirus 2 (Sars-Cov-2) Infections in Hong Kong.” Research Square. https://doi.org/10.21203/rs.3.rs-29548/v1.

Britton, T, F Ball, and P Trapman. 2020. “The Disease-Induced Herd Immunity Level for Covid-19 Is Substantially Lower Than the Classical Herd Immunity Level.” https://arxiv.org/abs/2005.03085.

Endo, A, Centre for the Mathematical Modelling of Infectious Diseases COVID-19 Working Group, S Abbott, AJ Kucharski, and S Funk. 2020. “Estimating the Overdispersion in Covid-19 Transmission Using Outbreak Sizes Outside China [Version 1; Peer Review: 1 Approved, 1 Approved with Reservations].” Wellcome Open Res. https://doi.org/10.12688/wellcomeopenres.15842.1.

Lloyd-Smith, J. O., S. J. Schreiber, P. E. Kopp, and W. M. Getz. 2005. “Superspreading and the Effect of Individual Variation on Disease Emergence.” Nature 438 (7066): 355–59. https://doi.org/10.1038/nature04153.

  1. To be added to the list of characterising quantities such as doubling time, reproduction number, generation time, serial interval, …

  2. Lloyd-Smith et al. (2005) estimated \(k=0.16\) for SARS-CoV-1.


To leave a comment for the author, please follow the link and comment on their blog: Theory meets practice.... R-bloggers.com offers daily e-mail updates about R news and tutorials about learning R and many other topics. Click here if you're looking to post or find an R/data-science job. Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.

Mimic Excel’s Conditional Formatting in R

Sat, 05/30/2020 - 23:14

[This article was first published on triKnowBits, and kindly contributed to R-bloggers.]

The DT package is an interface between R and the JavaScript DataTables library (RStudio DT documentation). In Example 3 (at this page) they show how to heatmap-format a table. This post modifies the example to

  1. format each column individually
  2. shade in green rather than red
  3. use base R syntax rather than piping
  4. omit the extra accoutrements of the displayed table (from the answer to this stackoverflow post), except
  5. include a title.
Here we generate data similar to that in Example 3, but with average values growing by column:

set.seed(12345)
df = as.data.frame(
  cbind(round(rnorm(10, mean = 0), 3), 
  round(rnorm(10, mean = 4), 3), 
  round(rnorm(10, mean = 8), 3), 
  round(rnorm(10, mean = 16), 3), 
  round(rnorm(10, mean = 32), 3), 
  sample(0:1, 10, TRUE)))

Using the code in the example — modified to green — the darker values naturally appear in columns V4 and V5.

But that’s not what we want. For each column to have its own scale, simply apply RStudio’s algorithm to each column of df in a loop. The trick to notice is that formatStyle wants a datatable object as its first argument, and produces a datatable object as its result. Therefore, start off with a plain-Jane datatable and successively format each column, saving the result each time. Almost like building a ggplot. At the end, view the final result.

# Start with a (relatively) plain, unformatted datatable object
dt <- DT::datatable(df, 
                    options = list(dom = 't', ordering = FALSE),
                    caption = "Example 3 By Column")
# Loop through the columns formatting according to that column's distribution
for (j in seq_along(df)) {
  # Create breaks for shading column values high to low
  brks <- stats::quantile(df[[j]], probs = seq(.05, .95, .05), na.rm = TRUE)
  # Create shades of green for backgrounds
  y <- round(seq(255, 40, length.out = length(brks) + 1), 0)
  clrs <- paste0("rgb(", y, ", 255,", y, ")")
  # Format cells in j-th column
  dt <- DT::formatStyle(dt, j, backgroundColor = DT::styleInterval(brks, clrs))
}
dt

Actuaries in the crowd might recognize the image at the top of the post as the table of link ratios from the GenIns dataset in the ChainLadder package. There do not appear to be any distinctive trends in the ratios by age.


To leave a comment for the author, please follow the link and comment on their blog: triKnowBits.

drat 0.1.6: Rewritten macOS binary support

Sat, 05/30/2020 - 21:01

[This article was first published on Thinking inside the box, and kindly contributed to R-bloggers.]

A new version of drat arrived on CRAN overnight, once again taking advantage of the fully automated process available for such packages with few reverse depends and no open issues. As we remarked at the last release fourteen months ago when we scored the same nice outcome: Being a simple package can have its upsides…

This release is mostly the work of Felix Ernst who took on what became a rewrite of how binary macOS packages are handled. If you need to distribute binary packages for macOS users, this may help. Two more small updates were made, see below for full details.

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases is the better way to distribute code.

As your mother told you: Friends don’t let friends install random git commit snapshots. Rolled-up releases it is. drat is easy to use, documented by five vignettes and just works.

The NEWS file summarises the release as follows:

Changes in drat version 0.1.6 (2020-05-29)
  • Changes in drat functionality

    • Support for the various (current) macOS binary formats was rewritten (Felix Ernst in #89 fixing #88).

    • Travis CI use was updated to R 4.0.0 and bionic (Dirk).

    • A drat repo was added to the README (Thomas Fuller in #86).

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


To leave a comment for the author, please follow the link and comment on their blog: Thinking inside the box.

Don’t Feel Guilty About Selecting Variables

Sat, 05/30/2020 - 20:21

[This article was first published on R – Win-Vector Blog, and kindly contributed to R-bloggers.]

We have an exciting new article to share: Don’t Feel Guilty About Selecting Variables.

If you are at all interested in the probabilistic justification of important data science techniques, such as variable selection or pruning, this should be an informative and fun read.

“Data Science” is often criticized with the common slur “if it has science in the name it isn’t a science.” Data science is in fact a science for the following reason: it has empirical content. That is, there are methods that are used because we can confirm they work.

However, data science when done well also has a mathematical basis. We expect to find good mathematical, probabilistic, or statistical justification for reliable procedures.

Variable pruning or selection is one such procedure. It is well known that it can in fact improve data science results. It is an empirical fact or experience: for some datasets, for some fitting procedures explicit prior variable selection improves results. Our new note examines how this is not a mere empirical alchemy, but something that is mathematically justified and to be expected (under an appropriate Bayesian formulation of model fitting).

So please read on and also share: Don’t Feel Guilty About Selecting Variables, or How I Learned to Stop Worrying and Love Variable Selection.


To leave a comment for the author, please follow the link and comment on their blog: R – Win-Vector Blog.

Charting the CMV Awareness Gap

Sat, 05/30/2020 - 02:00

[This article was first published on Artful Analytics, and kindly contributed to R-bloggers.]

Sometimes it’s okay to use a secondary axis

Introduction

Speaking of viruses, did you know that June is National Cytomegalovirus (CMV) Awareness Month? Probably not, since most people have never heard of CMV (hence the need for a national awareness month).

CMV is a common virus that infects 50-80% of people by the time they are
40 years old. In most cases, it’s not a big deal. But if a pregnant
woman becomes infected, she can pass the virus to the unborn child,
which results in a congenital infection about 33% of the time.

Congenital CMV (cCMV) is the number one viral cause of birth defects in children. According to the National CMV Foundation, 1 in 200 children are born with CMV every year. That’s roughly 30,000 children. About 1 in 5 children born with CMV infection will have moderate to severe health problems including:

  • Hearing loss
  • Vision loss
  • Feeding issues
  • Intellectual disability
  • Microcephaly (small head or brain)
  • Cerebral Palsy
  • Seizures

Outcomes associated
with congenital CMV are wide-ranging and unpredictable.

Despite how common and potentially damaging CMV is, research shows that
only 9% of women have heard of the condition.

Awareness = prevention

Our son Gideon was born with congenital CMV in 2013. Like most parents, we had never heard of cCMV until our son was diagnosed.

Because cCMV is a viral infection, it is potentially preventable during pregnancy if you know to take certain basic precautions. However, knowing to take precautions requires having heard of the condition in the first place, which brings us back to the need for a National CMV Awareness Month.

One of the main tactics used in CMV awareness raising efforts is to
highlight the “awareness gap” between how few women have heard of CMV
and how many children are disabled by the condition each year.

In the past, the National CMV Foundation has used the graphic below for this purpose (Fig. 1). It nicely shows levels of awareness vs incidence of various congenital conditions in the US, based on data from Doutre et al. (2016).


Fig. 1

Recently, I was asked by the Foundation to revise this graphic to
enhance its effectiveness (not coincidentally, my
wife
is the Chair of the Scientific
Advisory Committee).

In this post, I describe my approach using ggplot2, as well as cowplot and related packages in R.

Mind the gap

Technically speaking, Fig. 1 is what you would call a bi-directional,
mirrored, diverging, or back-to-back bar chart. It is reminiscent of
pyramid style bar
charts often used to visualize population age distributions.

I suspect that when people see Fig. 1 they have a perceptual tendency to
sum the bars together rather than take the difference between each bar.
The former is typically how a bi-directional bar chart would be
interpreted. But since the purpose of the visualization is to highlight
the CMV awareness gap, it might be better to actually plot the gap
(linear distance) between awareness and incidence of long-term health problems for cCMV in comparison
to other conditions.

So my proposed enhancement is to layer the incidence data as a series of
dots on top of an ordered bar chart representing increasing awareness on
the x-axis, and use a secondary x-axis for incidence. Layering in this
way will create a visually salient gap between awareness and incidence
for cCMV at the top of the chart, which I can further highlight with
some text annotations.

Secondary axis (of evil?)

Early versions of {ggplot2} did not include the ability to add a secondary axis because Hadley Wickham believed (and probably still believes) that using a separate, secondary axis is a fundamentally flawed approach.

However, more recent versions of the package have included this
functionality with the sec_axis() function described
here. I think
we can assume from the addition of this functionality that Hadley isn’t
completely averse to the use of a secondary axis in some situations when
used with caution.

Again, my rationale for using a secondary x-axis in this case is to achieve a specific perceptual effect: to highlight the gap between cCMV awareness and incidence of disability visually, so that people viewing the chart will say “Wow! That’s some big gap.” And I think I can achieve this without being manipulative or misleading, because the gap really is quite big.

Without further ado…

Here’s how the chart looks (Fig. 2). You can download a high
resolution version here.


Fig. 2

And here’s the R code that produces the chart.

library(tidyverse)
library(cowplot)
library(ggtext)
library(magick)

# Get data from Doutre et al.
df <- tribble(
  ~condition, ~awareness, ~incidence,
  "Congenital Cytomegalovirus (CMV)", 6.7, 6000,
  "Congenital Toxoplasmosis", 8.53, 400,
  "Congenital Rubella Syndrome", 13.27, 3,
  "Beta Strep (Group B Strep)", 16.91, 380,
  "Parvovirus B19 (Fifth Disease)", 19.63, 1045,
  "Fetal Alcohol Syndrome", 61.04, 1200,
  "Spina Bifida", 64.54, 1500,
  "Sudden Infant Death Syndrome (SIDS)", 78.7, 1500,
  "Down Syndrome", 85.44, 6000,
  "Congenital HIV/AIDS", 86.33, 30
)

# Get National CMV logo
logo <- image_read("https://github.com/seth-dobson/cmv-charts/blob/master/CMV-Full-Tagline-Logo_Transparent.png?raw=true")

# Create chart
p <- df %>%
  ggplot(aes(x = reorder(condition, desc(awareness)), y = awareness)) +
  geom_col(fill = "#28C1DB") +
  geom_point(
    aes(x = condition, y = incidence / 70),
    size = 4,
    pch = 21,
    fill = "#FB791A"
  ) +
  scale_y_continuous(
    sec.axis = sec_axis(
      ~ . * 70,
      name = "Number of Children Born with the Condition Each Year (Dots)",
      labels = scales::comma_format()
    )
  ) +
  coord_flip() +
  labs(
    x = "",
    y = "Percentage of Women Who Have Heard of the Condition (Bars)",
    title = "Awareness vs Incidence of Congenital Conditions",
    caption = "Based on US data from Doutré SM *et al.* (2016) Losing Ground: Awareness of Congenital Cytomegalovirus in the United States. *Journal of Early Hearing Detection and Intervention* 1:39-48. Chart by Artful Analytics, LLC (@_sethdobson). For more information, visit nationalcmv.org."
  ) +
  theme_bw() +
  theme(
    plot.title = element_text(face = "bold", hjust = .5),
    plot.caption = element_textbox_simple(size = 6, margin = margin(10, 0, 0, 0)),
    axis.text = element_text(color = "black"),
    axis.title = element_text(size = 10)
  ) +
  background_grid(major = "none") +
  annotate(
    geom = "text",
    label = "Number of children\nborn with CMV",
    x = 7.8, y = 75, color = "#FB791A", size = 3
  ) +
  annotate(
    geom = "curve",
    x = 8.5, y = 75, xend = 10, yend = 84,
    curvature = -.3, arrow = arrow(length = unit(2, "mm")), color = "#FB791A"
  ) +
  annotate(
    geom = "text",
    label = "% of women who have\nheard of CMV",
    x = 7.8, y = 30, color = "#28C1DB", size = 3
  ) +
  annotate(
    geom = "curve",
    x = 8.5, y = 30, xend = 10, yend = 7,
    curvature = .20, arrow = arrow(length = unit(2, "mm")), color = "#28C1DB"
  )

# Combine chart with logo
ggdraw() +
  draw_plot(p) +
  draw_image(logo, x = .075, y = .1, scale = .2, hjust = .5, vjust = .5)

A few things to note about the code above:

  • The secondary x-axis is actually coded as a secondary y-axis since
    you have to use coord_flip() to get the categorical variable on the y-axis
    when using geom_col().
  • The sec_axis() function is used in conjunction with the sec.axis
    option within scale_y_continuous(). In order to align the two
    y-axes, I multiplied the secondary axis by 70 within sec_axis()
    and divided incidence by 70 within the aesthetics of geom_point().
    I arrived at the number 70 by trial and error. Not sure why this works,
    but it does.
  • I used the ColorZilla Google
    Chrome extension to get hex color values from the National CMV logo.
    That way I was able to match the colors in the logo to the chart
    elements without a lot of guesswork.
  • I am using the amazing
    ggtext package by Claus Wilke to render the
    plot.caption theme element in markdown, so I can easily italicize
    selected words with asterisks. The element_textbox_simple() function from
    {ggtext} also does word wrapping automatically.
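Regarding the factor of 70 found by trial and error: it is no accident, since it roughly equals the ratio of the largest incidence to the largest awareness percentage, which is exactly the stretch needed to map the dot series onto the bar scale. A quick sketch (values copied from the tribble above):

```r
# The secondary-axis scale factor falls out of the data: the ratio of the
# largest incidence value to the largest awareness percentage.
awareness <- c(6.7, 8.53, 13.27, 16.91, 19.63, 61.04, 64.54, 78.7, 85.44, 86.33)
incidence <- c(6000, 400, 3, 380, 1045, 1200, 1500, 1500, 6000, 30)
max(incidence) / max(awareness)  # about 69.5, hence 70
```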
Conclusion

Hopefully you will agree that my combination bar and dot chart (Fig. 2) is an improvement on the original graph (Fig. 1) in that it highlights the CMV awareness gap more effectively for a general audience. I also trust that Hadley would agree that this is an acceptable use of a secondary axis. Although, he might not. So nobody tell him, OK?

To learn more about congenital CMV visit
nationalcmv.org.

Questions or comments?

Feel free to reach out to me at any of the social links below.

For more R content, please visit
R-bloggers and
RWeekly.org.


To leave a comment for the author, please follow the link and comment on their blog: Artful Analytics.

Two Different Methods to Apply Some Corey Hoffstein Analysis to your TAA

Fri, 05/29/2020 - 20:17

[This article was first published on R – QuantStrat TradeR, and kindly contributed to R-bloggers.]

So, first off: I just finished a Thinkful data science in python bootcamp program that was supposed to take six months, in about four months. All of my capstone projects I applied to volatility trading; long story short, none of the ML techniques worked, and the more complex the technique I tried, the worse it performed. Is there a place for data science in Python in the world? Of course. Some firms swear by it. However, R currently has many more libraries developed specifically for quantitative finance, such as PerformanceAnalytics, quantstrat, PortfolioAnalytics, and so on. Even for more basic portfolio management tasks, I use functions such as Return.Portfolio and charts.PerformanceSummary in R, the equivalent for which I have not seen in Python. While there are some websites with their own dialects built on top of Python, such as quantConnect and quantopian, I think that’s more their own special brand of syntax, as opposed to being able to create freeform portfolio backtesting strategies from pandas.

In any case, here’s my Python portfolio from the bootcamp I completed. The fact that Yahoo’s data ingestion broke on the SHORTVOL index means that the supervised and unsupervised notebooks need their data input replaced by the one in the final capstone project. You can look at the notebooks to see exactly what I tried, but to cut to the chase, none of the techniques worked. Random forests, SVMs, XG boosting, UMAP…they don’t really apply to predicting returns. The features I used were those I use in my own trading strategy, at least some of them, so it wasn’t a case of “garbage in, garbage out”. And the more advanced the technique, the worse the results. In the words of one senior quant trading partner: “Auto-ML = auto-bankrupt”. So when people say “we use AI and machine learning to generate superior returns”, they’ve either found something absolutely spectacular (highly unlikely), or are just using the latest hype terms. After all, even linear regression can be thought of as a learning model.

Even taking PCAs of various term structure features did a worse job than my base volatility trading strategy. Of course, it’s gotten better since then as I added more risk management to the strategy, and caught a nice chunk of the coronavirus long vol move in March. You can subscribe to it here.

So yes, I code in Python now (if the previous post wasn't any indication). So if you need some Python development for quant work using the usual numpy/scipy/pandas stack, feel free to reach out to me.

Anyway, this post is about adding some Corey Hoffstein-style analysis to asset allocation strategies, this time in R, because it's a technique I used for a very recent freelance project for an asset allocation firm I work with off and on. I call it Corey Hoffstein-style because, on Twitter, he's always talking about analyzing the impact of timing luck. His blog at Newfound Research is terrific for thinking about elements one doesn't see in many other places, such as analyzing trend-following strategies in the context of option payoffs, the impact of timing luck and various parameters of lookback windows, and so on.

The quick idea is this: when you rebalance a portfolio every month, you want to know how changing the trading day on which you rebalance affects your results. This is what Walter does over at AllocateSmartly.

But a more interesting question is what happens when a portfolio is rebalanced on longer timeframes–that is, what happens when you rebalance a portfolio only once a quarter, once every six months, or once a year? What if instead of rebalancing quarterly on January, April, and so on, you rebalance instead on February, May, etc.?

This is a piece of code (in R, so far) that does exactly this:

offset_monthly_endpoints <- function(returns, k, offset) {
  # because the first endpoint is indexed to 0 and is the first index, add 1 to offset;
  # the modulo makes sure we don't have a 7-month offset on a 6-month rebalance--that's just 1
  mod_offset <- (offset + 1) %% k
  # get monthly endpoints
  eps <- endpoints(returns, on = 'months')
  # create indices from 1 to number of endpoints
  indices <- 1:length(eps)
  # only select endpoints that have the proper offset when modded by k
  selected_eps <- eps[indices %% k == mod_offset]
  # append start and end of data
  selected_eps <- unique(c(0, selected_eps, nrow(returns)))
  return(selected_eps)
}

Essentially, the idea behind this function is fairly straightforward: given that we want to subset on monthly endpoints at some interval (that is, k = 3 for quarterly, k = 6 for every six months, k = 12 for annual endpoints), we want to be able to offset those endpoints by some amount. The modulo operator lets us say "hey, offsetting by 4 on a 3-month rebalance is just the same thing as offsetting by 1 month." One other thing to note is that since R is a language that starts at index 1 (rather than 0), a 1 is added to the offset, so that offsetting by 0 will get the first monthly endpoint. Beyond that, it's simply a matter of creating an index going from 1 to the number of endpoints (with about 10 years of data, you have ~120 monthly endpoints) and selecting the endpoints whose index leaves the proper remainder when divided by k.
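As a quick standalone illustration of the modulo logic (numbers are made up, not tied to any data):

```r
# The function's internal offset is (offset + 1) %% k.
# A 4-month offset on a quarterly (k = 3) rebalance lands on the
# same endpoints as a 1-month offset.
k <- 3
mod_for <- function(offset) (offset + 1) %% k
c(mod_for(4), mod_for(1))  # both are 2
```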

So here’s how it works, with some sample data:

require(quantmod)
require(PerformanceAnalytics)

getSymbols('SPY', from = '1990-01-01')

> head(SPY[offset_monthly_endpoints(Return.calculate(Ad(SPY)), 3, 1)])
           SPY.Open SPY.High  SPY.Low SPY.Close SPY.Volume SPY.Adjusted
1993-01-29 43.96875 43.96875 43.75000  43.93750    1003200     26.29929
1993-04-30 44.12500 44.28125 44.03125  44.03125      88500     26.47986
1993-07-30 45.09375 45.09375 44.78125  44.84375      75300     27.15962
1993-10-29 46.81250 46.87500 46.78125  46.84375      80700     28.54770
1994-01-31 48.06250 48.31250 48.00000  48.21875     313800     29.58682
1994-04-29 44.87500 45.15625 44.81250  45.09375     481900     27.82893

> head(SPY[offset_monthly_endpoints(Return.calculate(Ad(SPY)), 3, 2)])
           SPY.Open SPY.High  SPY.Low SPY.Close SPY.Volume SPY.Adjusted
1993-02-26 44.43750 44.43750 44.18750  44.40625      66200     26.57987
1993-05-28 45.40625 45.40625 45.00000  45.21875      79100     27.19401
1993-08-31 46.40625 46.56250 46.34375  46.56250      66500     28.20059
1993-11-30 46.28125 46.56250 46.25000  46.34375     230000     28.24299
1994-02-28 46.93750 47.06250 46.81250  46.81250     333000     28.72394
1994-05-31 45.73438 45.90625 45.65625  45.81250     160000     28.27249

> head(SPY[offset_monthly_endpoints(Return.calculate(Ad(SPY)), 3, 3)])
           SPY.Open SPY.High  SPY.Low SPY.Close SPY.Volume SPY.Adjusted
1993-03-31 45.34375 45.46875 45.18750  45.18750     111600     27.17521
1993-06-30 45.12500 45.21875 45.00000  45.06250     437600     27.29210
1993-09-30 46.03125 46.12500 45.84375  45.93750      99300     27.99539
1993-12-31 46.93750 47.00000 46.56250  46.59375     312900     28.58971
1994-03-31 44.46875 44.68750 43.53125  44.59375     788800     27.52037
1994-06-30 44.82812 44.84375 44.31250  44.46875     271900     27.62466

Notice how we get different quarterly rebalancing end dates. This also works with semi-annual, annual, and so on. The one caveat to this method, however, is that when doing tactical asset allocation analysis in R, I subset by endpoints. And since I usually use monthly endpoints in intervals of one (that is, every monthly endpoint), it's fairly simple for me to incorporate measures of momentum over any monthly lookback period. That is, 1-month, 3-month, etc. are all fairly simple when rebalancing every month. However, if one were to rebalance every quarter and take only quarterly endpoints, then getting a one-month momentum measure every quarter would take a bit more work. And if one wanted to do quarterly rebalancing, tranche it every month, but also rebalance multiple times *throughout* the month rather than simply at month-end, that would require even more meticulousness.

However, a second, "kludge-y" method would be to run the backtest to find all the weights, and then apply a similar coding methodology to the *weights*. For instance, if you have a time series of monthly weights, just create an index ranging from 1 to the length of the weights, then, depending on how often you want to rebalance, subset for every index mod 3 == 0, 1, or 2. More generally, if you rebalance once every k months, you create an index ranging from 1 to the length of your series if the language is base 1 (R), or 0 to n-1 if it's base 0 (Python). Then you simply see which indices give a remainder of 0 through k-1 when taking the modulo k, and that's it. This will allow you to get k different rebalancing tranches by taking the indices of those endpoints, and you can still offset those endpoints daily as well. The caveat here, of course, is that you need to run the backtest for all of the individual months, and if you have a complex optimization routine, this can take an unnecessarily long time. So which method you use depends on the task at hand. This second method, however, is what I would use as a wrapper to a monthly rebalancing algorithm that already exists, such as my KDA asset allocation algorithm.
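A minimal sketch of that weight-subsetting idea, with a hypothetical `monthly_weights` matrix standing in for a backtest's output (the names and dimensions here are illustrative, not from the post):

```r
# 24 months of weights for 2 assets (stand-in data)
set.seed(1)
monthly_weights <- matrix(runif(48), nrow = 24, ncol = 2)

k <- 3  # rebalance every 3 months
idx <- seq_len(nrow(monthly_weights))

# one tranche per possible remainder 0..k-1
tranches <- lapply(0:(k - 1), function(offset) {
  monthly_weights[idx %% k == offset, , drop = FALSE]
})
sapply(tranches, nrow)  # 8 8 8: each tranche keeps every third month
```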

That’s it for this post. In terms of things I want to build going forward: I’d like to port some basic R functionality, such as Return.Portfolio and charts.PerformanceSummary, over to Python, and once I can get that working, demonstrate how to do a lot of the same asset allocation work I’ve done in R…in Python as well.

Thanks for reading.

NOTE: I am currently searching for a full-time role to make use of my R (and now Python) skills. If you are hiring, or know of someone that is, don’t hesitate to reach out to me on my LinkedIn.


To leave a comment for the author, please follow the link and comment on their blog: R – QuantStrat TradeR.

Syntax Highlighting in Blogdown; a very specific solution

Fri, 05/29/2020 - 02:00

[This article was first published on Posts on A stats website, and kindly contributed to R-bloggers].

If you spend more than 5 seconds on this site you will be able to tell that it is not one of the snazziest ones around. This is mostly by design but also because I know very little about web development.

These days it is really easy to have your own R website thanks to blogdown. blogdown interfaces with Hugo to let you have a working site up and running in minutes. A good tutorial to get started can be found here.

When I decided to build this site I knew I wanted a simple design and that I didn’t want to mess about too long with setting it up and so I went looking for Hugo themes and I settled on this one.

As you can see I’ve only got three pages; posts, tags and about. I’d rather like to add an archive and maybe a search bar but the point is I’m happy with the basic structure I’ve got. What’s important to me is that the posts render properly and that they are readable.

Which is why I wanted to add syntax highlighting to my posts. Without it the code chunks in your post look like this:

xgboostParams <- dials::parameters(
  min_n(),
  tree_depth(),
  learn_rate(),
  finalize(mtry(), select(proc_mwTrainSet, -outcome)),
  sample_size = sample_prop(c(0.4, 0.9))
)

It is functional but it makes the post look a bit samey. You can play around with the colour of the text to help differentiate between code and not-code.

If you apply syntax highlighting you end up with something more like this:

xgboostParams <- dials::parameters(
  min_n(),
  tree_depth(),
  learn_rate(),
  finalize(mtry(), select(proc_mwTrainSet, -outcome)),
  sample_size = sample_prop(c(0.4, 0.9))
)

This looks much nicer in my opinion and makes the post more readable.

So, how do you do it?

The answer won’t be universal but if you are lucky and the theme you’re using already supports it then this might save you some googling.

TL;DR

When creating a new post through the blogdown Addins, be sure to select Rmarkdown as the format and not Rmd.

A bit more detail

To anyone with some knowledge of Hugo the above will be completely obvious and even silly but actually it took me longer than I’d care to admit to get to the answer.

First, I knew that it should be possible to have syntax highlighting in my theme because it is mentioned on the theme’s page:

Hugo has built-in syntax highlighting, provided by Chroma. It is currently enabled in the config.toml file from the exampleSite.
Checkout the Chroma style gallery and choose the style you like.

Also, the config.toml file contains this section which is the bit that actually parametrises the highlighting.

[markup.highlight]
  codeFences = true
  hl_Lines = ""
  lineNoStart = 1
  lineNos = false
  lineNumbersInTable = true
  noClasses = true
  style = "solarized-dark"
  tabWidth = 4

In the code above solarized-dark is the name of the Chroma highlighting style. All the available styles can be found here.

However, I didn’t know how to activate it. In fact according to that description it should come activated by default but none of the posts I had created displayed any highlighting.

After some more googling I stumbled onto this section of the Creating Websites with R Markdown book, which outlines the differences between the Rmd and Rmarkdown formats.

Turns out that each format is rendered to HTML through a different converter: Rmarkdown uses something called Blackfriday, while Rmd uses Pandoc. As I understand it, Rmd is rendered by R and Rmarkdown is rendered by Hugo, so posts need to be rendered by Hugo in order for all the configs in the .toml file to apply.

In the aforementioned book the authors call out some limitations with Rmarkdown; namely that it does not support bibliography nor does it support HTML widgets.

The second one of those is more relevant to my site as I have at least one post that uses widgets. For example, this post contains a leaflet map which is not rendered if I use Rmarkdown. This means that for now if I want to use HTML widgets I’ll have to sacrifice syntax highlighting in those posts. Having said that, I am sure that somebody knows how to apply highlighting to Rmd files but for now I’m ok with the compromise.

One more thing I should say is that my site’s theme requires Hugo version 0.60.1 as a minimum which is quite a recent one. In older posts I found on this issue such as this one there are references to parameters like pygmentsCodefences and pygmentsStyle so if your theme is running on an older Hugo version this might be of help.

Also, if your site’s theme doesn’t already come with syntax highlighting this post might help you out. It goes into quite a bit of detail on how to add highlight.js.

That’s all I’ve got for now. I hope this is useful to at least one other R user lost in the ins and outs of how Hugo works.


To leave a comment for the author, please follow the link and comment on their blog: Posts on A stats website.

Mad methods

Fri, 05/29/2020 - 02:00

[This article was first published on R on OSM, and kindly contributed to R-bloggers].

Over the past few weeks, we’ve examined the three major methods used to set return expectations as part of the portfolio allocation process. Those methods were historical averages, discounted cash flow models, and risk premia models. Today, we’ll bring all these models together to compare and contrast their accuracy.

Before we make these comparisons, we want to remind readers that we’re now including a python version of the code we use to produce our analyses and graphs. Let us know if you find this useful. Second, let’s remind ourselves why we’re analyzing these return expectation methods in the first place. Recall that to construct a satisfactory portfolio, one needs estimates of both return and risk for the various assets that merit inclusion. Once you have those estimates, you can use them to combine assets in numerous ways to generate a bunch of portfolios with different risk-return combinations. Hopefully, some of these combinations will fit your required risk/return criteria.

Our project, then, has been to introduce these different methods so that we can ultimately decide which one seems the most useful to help make portfolio allocation decisions. Importantly, we should define the criteria for how we’re going to assess these methods. In the prior posts, we examined the methods’ ability to forecast future returns using the error rate as the main judge. When we actually get down to deciding on our asset allocation, we’re not going to be as concerned with having the best estimate of future returns. Rather, we’re looking to identify portfolios that offer the best risk-adjusted returns based on our estimates. This might seem like too fine of a point. But it is important to note that, assuming we’re trying to build a reasonably diversified portfolio, the accuracy of any individual return estimate will be dampened by the number of assets in the portfolio. Thus we don’t need “perfect” estimates to create a “good” portfolio.
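A toy simulation of that dampening effect, under the simplifying assumptions of independent estimate errors and equal weights (illustrative only, not the post's model):

```r
set.seed(123)
n_assets <- 20
est_sd <- 0.05  # sd of the error in each asset's return estimate

# the error of the portfolio-level estimate is the mean of the per-asset errors,
# so its sd shrinks toward est_sd / sqrt(n_assets) ~ 1.1%
port_errors <- replicate(1e4, mean(rnorm(n_assets, 0, est_sd)))
c(single = est_sd, portfolio = round(sd(port_errors), 3))
```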

In any case, here’s the plan for this post. We’ll start off exploring the different methods graphically, then run the basic models over different time frames, and finally consider whether pooling the predictions can improve results.

Before we begin, it’s important to flag some of the methodological hurdles this analysis faces. We’re using data for the S&P 500, US nominal GDP, and the market risk premium, as published by Dartmouth professor Kenneth R. French. Additionally, the discounted cash flow (DCF) model we analyzed in a prior post dispensed with the dividend yield and assumed the required return would be roughly equivalent to the growth rate of nominal GDP. This was clearly a short cut and could be considered sloppy.1 For this post we fix that, so that the expected return is calculated via a simplified DCF.

There are also data compilation issues: the frequencies don’t all align. S&P data are daily, GDP data are quarterly, and market risk premium data are estimated annually or monthly (but with a much shorter time frame). Hence, we decided to go with annual data since that would provide us with the longest overlapping series, starting in 1960. Such frequency matches the shortest time horizon for most longer term oriented professional and individual investors. Many, however, use shorter time spans.

Finally, let’s summarize how each method is constructed:

  • Naive: This is our benchmark. It essentially takes last year’s return as the predictor for next year’s.

  • Historical: This is the cumulative return for the data series. Hence, as we roll forward in time any single year’s influence on the prediction decreases at a decelerating rate.

  • DCF: This is the expected return calculated using the Gordon Growth model, based on the S&P’s dividend yield, and nominal GDP growth, as a proxy for the growth rate. GDP growth is calculated using the twelve months ending in the third quarter. Recall, final GDP data are delayed by a quarter, so anyone conducting the analysis in real time would only have third quarter data available at year-end.

  • Risk premium: This is the market risk premium plus risk-free rate, as published by Prof. Kenneth R. French on his website.
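The DCF bullet above amounts to the Gordon growth identity; as a one-line sketch with made-up inputs (a 2% dividend yield and 4% nominal GDP growth are illustrative, not the post's data):

```r
# expected return = dividend yield grown one period + growth rate
dcf_expected_return <- function(div_yld, g) div_yld * (1 + g) + g
dcf_expected_return(div_yld = 0.02, g = 0.04)  # 0.0608, i.e. about 6.1%
```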

Housekeeping and introductions over, let’s begin. We’ll first compile the data and then graph the end-of-year “signal” on the x-axis and the following year’s return for the S&P 500 on the y-axis.

A few points to notice. The Naive, DCF, and Risk premium methods show almost no relationship between the explanatory variable and next year’s return. Historical averages suggest a negative relationship with forward returns, consistent with the mean reversion present in the series.2

We could run a few more graphs, but let’s jump into the machine learning. We’ll train the models on 70% of the data, reserving the remainder for the test set. We’ll then show the outputs of the models with actual vs. predicted graphs on the out-of-sample data. We’ll follow that with accuracy tables based on root mean-squared error (RMSE) and the error scaled by the average of the actual results.
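For reference, the two accuracy measures can be written as small helpers (the same formulas appear inside `ml_func` in the code at the end of the post):

```r
rmse <- function(pred, actual) sqrt(mean((pred - actual)^2, na.rm = TRUE))
rmse_scaled <- function(pred, actual) rmse(pred, actual) / mean(actual, na.rm = TRUE)

rmse(pred = c(0.10, 0.20), actual = c(0.10, 0.10))  # sqrt(0.005), about 0.071
```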

Here’s the first model in which we regress the one-year forward return on the S&P against the return expectation generated by the respective method. We include a 45° line to highlight error.

Both the Naive and DCF methods sport a fair amount of clustering. The Historical and Risk premium methods are better distributed. Note that even though we included a 45° line it doesn’t appear that way, due to the graphing algorithm, which does not scale each axis equally. Of course, if it did, we wouldn’t be able to make out the dispersion in the predicted values given the spread in the actuals. Another problem is 2008, the global financial crisis, which posted a loss close to 40%. It looks like an outlier, but needs to be there given the character of financial returns. Let’s look at the accuracy scores.

Table 1: Machine learning accuracy one-year forward returns

Model         Train: RMSE   Train: RMSE scaled   Test: RMSE   Test: RMSE scaled
Naive                0.16                 2.03         0.16                1.72
Historical           0.15                 1.94         0.16                1.73
DCF                  0.16                 2.02         0.16                1.73
Risk premium         0.16                 2.02         0.16                1.71

The RMSE for both the training and test sets are very similar both within and across groups. The scaled error declines from training to test sets for all methods. This is because the average returns in the test set are about 17% higher than the training set. So the improvement is data, not model, related. Whatever the case, no method stands out as better on the test set.

Now, we’ll look at how well each of the methods predicts average returns over the next three years. The averaging should smooth out some of the volatility and produce slightly better forecasts. First, up is the actual vs. predicted graph.

In general, the clustering improves, except for the DCF method. In most cases, the models’ predicted values over or undershoot the actual values equally. Let’s check out the accuracy scores.

Table 2: Machine learning accuracy average three-year forward returns

Model         Train: RMSE   Train: RMSE scaled   Test: RMSE   Test: RMSE scaled
Naive                0.09                 1.09         0.07                0.97
Historical           0.09                 1.09         0.07                0.96
DCF                  0.09                 1.09         0.07                0.97
Risk premium         0.09                 1.09         0.07                0.97

As expected, the accuracy is better on both the absolute and scaled RMSE. There appears little difference among the methods as well.

Now, we’ll look at each method against the average five-year forward return to see if that improves accuracy further. Here’s the actual vs. predicted graph.

Clustering improves modestly. Outliers persist. In this instance, the Historical method seems to show the most balanced dispersion around the 45° line. Note that for the Risk premium method, the model appears to overestimate returns slightly, as shown by more points below the line than above. Let’s check on the accuracy.

Table 3: Machine learning accuracy average five-year forward returns

Model         Train: RMSE   Train: RMSE scaled   Test: RMSE   Test: RMSE scaled
Naive                0.07                 0.81         0.06                0.90
Historical           0.07                 0.80         0.05                0.83
DCF                  0.07                 0.82         0.06                0.88
Risk premium         0.07                 0.82         0.06                0.89

The absolute and scaled RMSE improved again. The Historical method saw the biggest improvement, which should be expected if there is some latent mean reversion in the returns. Importantly, all three methods show better scaled accuracy than the Naive benchmark, though the Risk premium method’s edge isn’t that dramatic.

Which method should we choose? Part of that depends on what our time horizon is. If we want to rebalance the portfolio every year based on revised expectations, then one might prefer the Risk premium method. But its accuracy isn’t so much better that it’s a clear winner.

If we’re looking at longer time frames, then the Historical or DCF methods appear to enjoy better accuracy, with the Historical being meaningfully better on the five-year horizon. It’s also the easiest to calculate. But the warning is that there is no guarantee that the future will look at all like the past.

If one method is good, shouldn’t three be better? We’ll now examine how a multivariate model of all three methods performs and then compare it to an ensemble approach where we average the predictions of each method. Before we do that we should check the correlations among the different explanatory variables to avoid multi-collinearity. In other words, if the different methods are highly correlated, using them to model or predict the same thing will generate results that seem more significant than they actually are.

Table 4: Explanatory variable correlations

              Historical     DCF   Risk premium
Historical          1.00   -0.56           0.36
DCF                -0.56    1.00          -0.21
Risk premium        0.36   -0.21           1.00

As the table shows, the correlations are relatively low and/or negative, so we should be safe. While we’ve run through all the time periods offline, we’ll show the average five-year forward return only for the sake of brevity. It also performs the best relative to the other methods. The usual actual vs. predicted graph is below.

We now compare the multivariate accuracy to the average of the univariate methods plus the Naive method for the average five-year forward return. Note that for univariate average, we’re averaging the prediction of each method and then comparing that to the actual.
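The averaging step for the univariate ensemble can be sketched in a couple of lines (toy predictions, not the post's fitted values):

```r
# stand-in out-of-sample predictions from the three methods
pred_hist <- c(0.05, 0.07)
pred_dcf  <- c(0.06, 0.05)
pred_rp   <- c(0.04, 0.06)

# row-wise mean of the three methods' predictions, which is then
# scored against the actuals like any single method's prediction
preds_mean <- rowMeans(cbind(pred_hist, pred_dcf, pred_rp))
preds_mean  # 0.05 0.06
```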

Table 5: Machine learning accuracy average five-year forward returns

Model          Test: RMSE   Test: RMSE scaled
Naive                0.06                0.90
Univariate           0.06                0.86
Multivariate         0.06                0.86

The multivariate method performs about the same as averaging the predictions generated by each of the univariate methods. Both perform better than the Naive method on a scaled basis. For many folks, this is probably as far as one wants or needs to go. The ensemble methods perform better than all but the Historical method, and both the univariate average and the multivariate model outperform the Naive method. Since there’s little difference between the two, using the multivariate model is probably more efficient since it cuts down on calculations (thanks to R).

What have we learned? No particular method tends to dominate the others on all time frames. In general, as we extend the time frame, prediction accuracy improves for all methods, including the Naive. Of course, all this relates only to one proxy of one asset class: the S&P 500 for equities. To build a diversified portfolio one would want to conduct the same analyses for bonds, and real assets too.

That prediction accuracy improved with longer time frames seems a key insight to us. It wasn’t better models that improved the results, it was a more nuanced understanding of the data’s structure. Of course, we didn’t employ more sophisticated models like random forests or neural networks, nor did we experiment with any sort of feature selection. These avenues might prove fruitful or dead-ends. Whichever the case, fleshing them out would require additional posts.

Our next step will be to extend these methods across the different asset classes to build a diversified portfolio. Alternatively, we could explore more sophisticated statistical learning techniques before moving on. If you have a view on which you’d prefer, send us a message at the email below. Until next time, the R code, followed by the python code, for this post is below.

# Built using R 3.6.2 ## Load packages suppressPackageStartupMessages({ library(tidyquant) library(tidyverse) library(readxl) library(httr) }) ## Load data ## Note this is a bit messy as I already had the files from the different posts, which I loaded ## and then joined. But it's not very helpful to see df <- readRDS("clean_file.rds"). So I show how I ## pulled the data for the various files from each of the previous posts. Probably a cleaner way to do ## this, but I hope you get the picture. ## Fama-French factors url <- "http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/F-F_Research_Data_Factors_CSV.zip" GET(url, write_disk(tf1 <- tempfile(fileext = ".zip"))) erp <- read_csv(unzip(tf1), skip=3) erp <- erp %>% slice(1130:(nrow(erp)-1)) %>% rename("year" = X1, "mkt" = `Mkt-RF`, "rf" = RF) ## GDP data to align with S&P 500 sp <- getSymbols("^GSPC", src = "yahoo", from = "1950-01-01", to = "2020-01-01", auto.assign = FALSE) %>% Ad() %>% `colnames<-`("sp") sp_qtr <- to.quarterly(sp, indexAt = "lastof", OHLC = FALSE) gdp <- getSymbols("GDP", src = "FRED", from = "1950-01-01", to = "2020-01-01", auto.assign = FALSE) %>% `colnames<-`("gdp") qtr <- index(to.quarterly(sp["1950/2019"], indexAt = "lastof", OHLC = FALSE)) gdp_eop <- xts(coredata(gdp["1950/2019"]), order.by = qtr) gdp_growth <- gdp %>% mutate(year = year(date), gdp_lag = lag(gdp_ret)) %>% group_by(year) %>% mutate(gdp_yr = sum(gdp_lag)) %>% distinct(year, gdp_yr) ## Damodaran risk premium url1 <- "http://www.stern.nyu.edu/~adamodar/pc/datasets/histimpl.xls" GET(url1, write_disk(tf <- tempfile(fileext = ".xls"))) irp <- read_excel(tf, sheet = "Historical Impl Premiums", skip = 6) ipr <- irp[-61,] df <- irp %>% left_join(gdp_growth, by = "year") %>% left_join(erp, by = "year") %>% select(-c(t_bill, bond_bill, erp_ddm, erp_fcfes, SMB, HML)) %>% mutate_at(vars(mkt, rf), function(x) x/100) %>% mutate(cum_ret = cummean(ifelse(is.na(sp_ret),0,sp_ret)), req_ret = div_yld*(1+gdp_yr)+gdp_yr, rp = mkt + rf, 
sp_fwd3 = lead(sp_fwd,3),
       sp_mu3 = runMean(ifelse(is.na(sp_fwd3), 0, sp_fwd3), 3),
       sp_fwd5 = lead(sp_fwd,5),
       sp_mu5 = runMean(ifelse(is.na(sp_fwd5), 0, sp_fwd5), 5))

## Graph previous vs next
df %>%
  select(year, sp_ret, cum_ret, req_ret, rp, sp_fwd) %>%
  gather(key, value, -c(year, sp_fwd)) %>%
  mutate(key = factor(key, levels = c("sp_ret", "cum_ret", "req_ret", "rp"))) %>%
  ggplot(aes(value*100, sp_fwd*100)) +
  geom_point(color = "darkblue", size = 2, alpha = 0.5) +
  geom_smooth(method = "lm", se = FALSE, linetype = "dashed", color = "slategrey") +
  facet_wrap(~key, scales = "free",
             labeller = as_labeller(c(sp_ret = "Naive", cum_ret = "Historical",
                                      req_ret = "DCF", rp = "Risk premium"))) +
  labs(x = "Indicator (%)", y = "Forward return (%)",
       title = "Capital market expectations methods",
       caption = "Source: Yahoo, FRED, Prof K.R. French, OSM Estimates") +
  theme(plot.caption = element_text(hjust = 0))

### Machine learning
## Create and run machine learning function and rmse table output function
ml_func <- function(train_set, test_set, x_var1, y_var){
  form <- as.formula(paste(y_var, " ~ ", x_var1))
  mod <- lm(form, train_set)

  act_train <- train_set[, y_var] %>% unlist() %>% as.numeric()
  act_test <- test_set[, y_var] %>% unlist() %>% as.numeric()

  pred_train <- predict(mod, train_set)
  rmse_train <- sqrt(mean((pred_train - act_train)^2, na.rm = TRUE))
  rmse_train_scaled <- rmse_train/mean(act_train, na.rm = TRUE)

  pred_test <- predict(mod, test_set)
  rmse_test <- sqrt(mean((pred_test - act_test)^2, na.rm = TRUE))
  rmse_test_scaled <- rmse_test/mean(act_test, na.rm = TRUE)

  out <- list(coefs = coef(mod),
              pred_train = pred_train,
              rmse_train = rmse_train,
              rmse_train_scaled = rmse_train_scaled,
              pred_test = pred_test,
              rmse_test = rmse_test,
              rmse_test_scaled = rmse_test_scaled)
  out
}

rmse_table <- function(naive, hist, dcf, rp){
  data.frame(Model = c("Naive", "Historical", "DCF", "Risk premium"),
             train_rmse = c(naive$rmse_train, hist$rmse_train, dcf$rmse_train, rp$rmse_train),
             train_rmse_scaled = c(naive$rmse_train_scaled, hist$rmse_train_scaled,
                                   dcf$rmse_train_scaled, rp$rmse_train_scaled),
             test_rmse = c(naive$rmse_test, hist$rmse_test, dcf$rmse_test, rp$rmse_test),
             test_rmse_scaled = c(naive$rmse_test_scaled, hist$rmse_test_scaled,
                                  dcf$rmse_test_scaled, rp$rmse_test_scaled)) %>%
    mutate_at(vars(-Model), function(x) round(x, 2)) %>%
    rename("Train: RMSE" = train_rmse,
           "Train: RMSE scaled" = train_rmse_scaled,
           "Test: RMSE" = test_rmse,
           "Test: RMSE scaled" = test_rmse_scaled) %>%
    knitr::kable(caption = "Machine learning accuracy")
}

ml_output_graf <- function(df, y_var = "sp_fwd", naive, hist, dcf, rp){
  df[, y_var] %>%
    `colnames<-`("actual") %>%
    mutate(naive = naive$pred_test, hist = hist$pred_test,
           dcf = dcf$pred_test, rp = rp$pred_test) %>%
    select(actual, naive, hist, dcf, rp) %>%
    gather(key, value, -actual) %>%
    mutate(key = factor(key, levels = c("naive", "hist", "dcf", "rp"))) %>%
    ggplot(aes(value*100, actual*100)) +
    geom_point(color = "darkblue", size = 2, alpha = 0.5) +
    geom_abline(linetype = "dashed", color = "slategrey") +
    facet_wrap(~key, scales = "free",
               labeller = as_labeller(c(naive = "Naive", hist = "Historical",
                                        dcf = "DCF", rp = "Risk premium"))) +
    labs(x = "Predicted (%)", y = "Actual (%)",
         title = "Actual vs. predicted out of sample",
         caption = "Source: Yahoo, FRED, Prof K.R. French, OSM Estimates") +
    theme(plot.caption = element_text(hjust = 0))
}

## Split data
split <- round(nrow(df)*.7, 0)
train <- df[1:split, ]
test <- df[(split+1):nrow(df), ]

## Run one-year model
naive <- ml_func(train, test, "sp_ret", "sp_fwd")
hist <- ml_func(train, test, "cum_ret", "sp_fwd")
dcf <- ml_func(train, test, "req_ret", "sp_fwd")
rp <- ml_func(train, test, "rp", "sp_fwd")

## Show graph
ml_output_graf(test, y_var = "sp_fwd", naive, hist, dcf, rp)

## Print table
rmse_table(naive, hist, dcf, rp)

## Run 3-year average
naive <- ml_func(train, test, "sp_ret", "sp_mu3")
hist <- ml_func(train, test, "cum_ret", "sp_mu3")
dcf <- ml_func(train, test, "req_ret", "sp_mu3")
rp <- ml_func(train, test, "rp", "sp_mu3")

## Show graph
ml_output_graf(test, y_var = "sp_mu3", naive, hist, dcf, rp)

## Print table
rmse_table(naive, hist, dcf, rp)

## Run 5-year average
naive <- ml_func(train, test, "sp_ret", "sp_mu5")
hist <- ml_func(train, test, "cum_ret", "sp_mu5")
dcf <- ml_func(train, test, "gdp_yr", "sp_mu5")
rp <- ml_func(train, test, "erp_fcfe", "sp_mu5")

## Show graph
ml_output_graf(test, "sp_mu5", naive, hist, dcf, rp)

## Print table
rmse_table(naive, hist, dcf, rp)

## Correlation table
train %>%
  select(cum_ret, req_ret, rp) %>%
  rename("Historical" = cum_ret, "DCF" = req_ret, "Risk premium" = rp) %>%
  cor(.) %>%
  round(., 2) %>%
  knitr::kable(caption = "Correlations of explanatory variables")

## Run multivariate regression
combo_mod <- lm(sp_mu5 ~ cum_ret + req_ret + rp, train)
preds <- predict(combo_mod, test)
rmse_comb <- sqrt(mean((preds - test$sp_mu5)^2, na.rm = TRUE))
rmse_comb_scaled <- rmse_comb/mean(test$sp_mu5, na.rm = TRUE)

## Graph of multivariate actual vs predicted
ggplot() +
  geom_point(aes(preds*100, test$sp_mu5*100), color = "darkblue", size = 2, alpha = 0.5) +
  geom_abline(linetype = "dashed", color = "slategrey") +
  labs(x = "Predicted (%)", y = "Actual (%)",
       title = "Actual vs. predicted out of sample",
       caption = "Source: Yahoo, FRED, Prof K.R. French, OSM Estimates") +
  theme(plot.caption = element_text(hjust = 0))

# Calculate average of predictions and accuracy
preds_mean <- rowMeans(cbind(hist$pred_test, dcf$pred_test, rp$pred_test))
rmse_mean <- sqrt(mean((preds_mean - test$sp_mu5)^2, na.rm = TRUE))
rmse_mean_scaled <- rmse_mean/mean(test$sp_mu5, na.rm = TRUE)

# Print table of the results
data.frame(Model = c("Naive", "Univariate", "Multivariate"),
           test_rmse = c(naive$rmse_test, rmse_mean, rmse_comb),
           test_rmse_scaled = c(naive$rmse_test_scaled, rmse_mean_scaled, rmse_comb_scaled)) %>%
  mutate_at(vars(-Model), function(x) round(x, 2)) %>%
  rename("Test: RMSE" = test_rmse, "Test: RMSE scaled" = test_rmse_scaled) %>%
  knitr::kable(caption = "Machine learning accuracy")
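The accuracy measures used throughout, root mean squared error and RMSE scaled by the mean actual value, plus the simple averaging of the univariate predictions, can be sketched in a few lines of language-neutral Python. The numbers below are made up for illustration; they are not the article's data:

```python
# Sketch of the accuracy metrics used above: RMSE, RMSE scaled by the
# mean actual value, and a simple average of several models' predictions.
import math

def rmse(preds, actuals):
    # root mean squared error over paired predictions and actuals
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(preds, actuals)) / len(preds))

actual  = [0.10, 0.05, -0.02, 0.08]
model_a = [0.09, 0.06,  0.00, 0.07]   # e.g. a "historical"-style forecast
model_b = [0.12, 0.03, -0.04, 0.10]   # e.g. a "DCF"-style forecast

# Scaled RMSE divides by the mean actual, so errors are comparable across targets.
scaled = rmse(model_a, actual) / (sum(actual) / len(actual))

# Averaging predictions is the simple ensemble used at the end of the post.
ensemble = [(a + b) / 2 for a, b in zip(model_a, model_b)]
```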

And here’s the python code for pythonistas:

# Built using python 3.7
# NOTE: Python results don't always match what I get in R. Sometimes I think that might be the way
# scikit-learn handles NaNs, but I'm not entirely sure. Let me know if you get something wildly
# different.

#### Load libraries and data
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.style.use('ggplot')
sns.set()

# Note the R code shows how I wrangled the data. Due to time constraints, I saved that as a csv
# and loaded it in the python environment. I hope to come back and show the pythonic way to get
# the data at some point.
df = pd.read_csv('cme_methods.csv')

#### Plot different methods
fig = plt.figure(figsize=(14,10))
a1 = fig.add_subplot(221)
a2 = fig.add_subplot(222)
a3 = fig.add_subplot(223)
a4 = fig.add_subplot(224)

sns.regplot(df.sp_ret, df.sp_fwd, ax=a1, color="darkblue", ci=None)
sns.regplot(df.cum_ret, df.sp_fwd, ax=a2, color="darkblue", ci=None)
sns.regplot(df.req_ret, df.sp_fwd, ax=a3, color="darkblue", ci=None)
sns.regplot(df.rp, df.sp_fwd, ax=a4, color="darkblue", ci=None)

axs = [a1, a2, a3, a4]
titles = ["Naive", "Historical", "DCF", "Risk premium"]
for i in range(4):
    axs[i].set_title(titles[i], fontsize=14)
    if i < 2:
        axs[i].set_xlabel("")
    else:
        axs[i].set_xlabel("Actual")
    if i % 2 != 0:
        axs[i].set_ylabel("")
    else:
        axs[i].set_ylabel("Return")

fig.suptitle("Return expectations methods")
plt.figtext(0, 0, "Source: Prof. A. Damodaran, NYU")
# plt.tight_layout()
plt.show()

#### Machine learning
x = df[['sp_ret', 'cum_ret', 'req_ret', 'rp']]
y = df[['sp_fwd', 'sp_mu3', 'sp_mu5']]

#### Create machine learning function
def ml_func(x, y, x_var, y_var):
    from sklearn.linear_model import LinearRegression

    rows = int(len(x)*.7)
    x_train = x.loc[1:rows, x_var].values.reshape(-1,1)
    y_train = y.loc[1:rows, y_var].values.reshape(-1,1)
    x_test = x.loc[rows:, x_var].values.reshape(-1,1)
    y_test = y.loc[rows:, y_var].values.reshape(-1,1)

    mod = LinearRegression()
    mod.fit(x_train, y_train)

    pred_train = mod.predict(x_train)
    rmse_train = np.sqrt(np.nanmean((pred_train - y_train)**2))
    rmse_train_scaled = rmse_train/np.nanmean(y_train)

    pred_test = mod.predict(x_test)
    rmse_test = np.sqrt(np.nanmean((pred_test - y_test)**2))
    rmse_test_scaled = rmse_test/np.nanmean(y_test)

    return pred_test, y_test, rmse_train, rmse_train_scaled, rmse_test, rmse_test_scaled

#### Run 1-year model
naive = ml_func(x, y, 'sp_ret', 'sp_fwd')
hist = ml_func(x, y, 'cum_ret', 'sp_fwd')
dcf = ml_func(x, y, 'req_ret', 'sp_fwd')
rp = ml_func(x, y, 'rp', 'sp_fwd')

#### Create graphing function
def plot_ml(naive, hist, dcf, rp):
    fig = plt.figure(figsize=(14,10))
    a1 = fig.add_subplot(221)
    a2 = fig.add_subplot(222)
    a3 = fig.add_subplot(223)
    a4 = fig.add_subplot(224)

    a1.scatter(naive[0]*100, naive[1]*100, color="darkblue", s=100, alpha=0.5)
    xn = naive[0]*100
    yn = xn
    a1.plot(xn, yn, linestyle=":", color='grey')

    a2.scatter(hist[0]*100, hist[1]*100, color="darkblue", s=100, alpha=0.5)
    xh = hist[0]*100
    yh = xh
    a2.plot(xh, yh, linestyle=":", color='grey')

    a3.scatter(dcf[0]*100, dcf[1]*100, color="darkblue", s=100, alpha=0.5)
    xd = dcf[0]*100
    yd = xd
    a3.plot(xd, yd, linestyle=":", color='grey')

    a4.scatter(rp[0]*100, rp[1]*100, color="darkblue", s=100, alpha=0.5)
    xr = rp[0]*100
    yr = xr
    a4.plot(xr, yr, linestyle=":", color='grey')

    axs = [a1, a2, a3, a4]
    titles = ["Naive", "Historical", "DCF", "Risk premium"]
    for i in range(4):
        axs[i].set_title(titles[i], fontsize=14)
        if i < 2:
            axs[i].set_xlabel("")
        else:
            axs[i].set_xlabel("Predicted")
        if i % 2 != 0:
            axs[i].set_ylabel("")
        else:
            axs[i].set_ylabel("Actual")

    fig.suptitle("Machine learning methods")
    plt.figtext(0, 0, "Source: Yahoo, FRED, Prof. A. Damodaran, NYU, OSM estimates")
    # plt.tight_layout()
    plt.show()

#### Graph 1-year model
plot_ml(naive, hist, dcf, rp)

#### Create ml table
def ml_table(naive, hist, dcf, rp):
    table = pd.DataFrame({'Model': ["Naive", "Historical", "DCF", "Risk premium"],
                          'Train RMSE': [naive[2], hist[2], dcf[2], rp[2]],
                          "Train: RMSE scaled": [naive[3], hist[3], dcf[3], rp[3]],
                          "Test: RMSE": [naive[4], hist[4], dcf[4], rp[4]],
                          "Test: RMSE scaled": [naive[5], hist[5], dcf[5], rp[5]]})
    return round(table, 2)

#### Print 1-year model table
ml_table(naive, hist, dcf, rp)

#### Run average 3-year forward models
# Need to index at 2 since scikit-learn doesn't seem to handle NaNs like R handles NAs
naive = ml_func(x[2:], y[2:], "sp_ret", "sp_mu3")
hist = ml_func(x[2:], y[2:], "cum_ret", "sp_mu3")
dcf = ml_func(x[2:], y[2:], "req_ret", "sp_mu3")
rp = ml_func(x[2:], y[2:], "rp", "sp_mu3")

##### Plot actual vs predicted
plot_ml(naive, hist, dcf, rp)

##### Print accuracy tables
ml_table(naive, hist, dcf, rp)

#### Run average five-year forward models
# Need to index at 4 since scikit-learn doesn't seem to handle NaNs like R handles NAs
naive = ml_func(x[4:], y[4:], "sp_ret", "sp_mu5")
hist = ml_func(x[4:], y[4:], "cum_ret", "sp_mu5")
dcf = ml_func(x[4:], y[4:], "req_ret", "sp_mu5")
rp = ml_func(x[4:], y[4:], "rp", "sp_mu5")

plot_ml(naive, hist, dcf, rp)
ml_table(naive, hist, dcf, rp)

#### Correlation analysis
rows = int(len(x)*.7)
corr_df = x.loc[1:rows, ['cum_ret', 'req_ret', 'rp']]
corr_df.columns = ['Historical', 'DCF', 'Risk premium']
corr_df.corr().round(2)

#### Multivariate model
from sklearn.linear_model import LinearRegression

rows = int(len(x)*.7)
x_train = x.loc[4:rows, ['cum_ret', 'req_ret', 'rp']].values.reshape(-1,3)
y_train = y.loc[4:rows, 'sp_mu5'].values.reshape(-1,1)
x_test = x.loc[rows:, ['cum_ret', 'req_ret', 'rp']].values.reshape(-1,3)
y_test = y.loc[rows:, 'sp_mu5'].values.reshape(-1,1)

mod = LinearRegression()
mod.fit(x_train, y_train)

pred_test = mod.predict(x_test)
rmse_test = np.sqrt(np.nanmean((pred_test - y_test)**2))
rmse_test_scaled = rmse_test/np.nanmean(y_test)

plt.figure(figsize=(12,6))
plt.scatter(pred_test*100, y_test*100, color="darkblue", s=100, alpha=0.5)
xs = pred_test*100
ys = xs
plt.plot(xs, ys, linestyle=":", color='grey')
plt.xlabel("Predicted return (%)")
plt.ylabel("Actual return (%)")
plt.title("Actual vs. predicted results for machine learning model")
plt.show()

preds_mean = pd.DataFrame({'hist': hist[0].reshape(21),
                           'dcf': dcf[0].reshape(21),
                           'rp': rp[0].reshape(21)},
                          index=np.arange(0, 21)).mean(axis=1)
rmse_mean = np.sqrt(np.mean((preds_mean.values - hist[1])**2))
rmse_mean_scaled = rmse_mean/np.mean(hist[1])

multi_mods = pd.DataFrame({'Model': ["Naive", "Univariate", "Multivariate"],
                           'Test: RMSE': [naive[4], rmse_mean, rmse_test],
                           'Test: RMSE scaled': [naive[5], rmse_mean_scaled, rmse_test_scaled]})
multi_mods.round(2)
  1. The results are actually not that different. But we don’t want to get buried in too many details.

  2. All covariance stationary data time series have a finite mean reverting level. Who knows what that means, but we’ve always wanted to write that!


To leave a comment for the author, please follow the link and comment on their blog: R on OSM. R-bloggers.com offers daily e-mail updates about R news and tutorials about learning R and many other topics. Click here if you're looking to post or find an R/data-science job. Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.

pins 0.4: Versioning

Fri, 05/29/2020 - 02:00

[This article was first published on pins, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here) Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.

A new version of pins is available on CRAN today, which adds support for versioning your datasets and DigitalOcean Spaces boards!

As a quick recap, the pins package allows you to cache, discover and share resources. You can use pins in a wide range of situations, from downloading a dataset from a URL to creating complex automation workflows (learn more at pins.rstudio.com). You can also use pins in combination with TensorFlow and Keras; for instance, use cloudml to train models on cloud GPUs and, rather than manually copying files into the GPU instance, store them as pins directly from R.

To install this new version of pins from CRAN, simply run:

install.packages("pins")

You can find a detailed list of improvements in the pins NEWS file.

Versioning

To illustrate the new versioning functionality, let’s start by downloading and caching a remote dataset with pins. For this example, we will download the current weather in London; this happens to be in JSON format and requires jsonlite to parse it:

library(pins)

weather_url <- "https://samples.openweathermap.org/data/2.5/weather?q=London,uk&appid=b6907d289e10d714a6e88b30761fae22"

pin(weather_url, "weather") %>%
  jsonlite::read_json() %>%
  as.data.frame()

  coord.lon coord.lat weather.id weather.main weather.description weather.icon
1     -0.13     51.51        300      Drizzle light intensity drizzle       09d

One advantage of using pins is that, even if the URL or your internet connection becomes unavailable, the above code will still work.

But back to pins 0.4! The new signature parameter in pin_info() allows you to retrieve the “version” of this dataset:

pin_info("weather", signature = TRUE)

# Source: local [files]
# Signature: 624cca260666c6f090b93c37fd76878e3a12a79b
# Properties:
#   - path: weather

You can then validate the remote dataset has not changed by specifying its signature:

pin(weather_url, "weather", signature = "624cca260666c6f090b93c37fd76878e3a12a79b") %>%
  jsonlite::read_json()

If the remote dataset changes, pin() will fail and you can take the appropriate steps to accept the changes by updating the signature or properly updating your code. The previous example is useful as a way of detecting version changes, but we might also want to retrieve specific versions even when the dataset changes.
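A signature is essentially a content fingerprint: if the bytes change, the hash changes. pins’ exact hashing scheme is an internal detail, but the idea can be sketched with a plain SHA-1 digest in Python (the function name check_signature is made up for illustration):

```python
# Illustrative sketch of signature-based change detection, NOT pins' actual
# implementation: hash the content, then compare against a pinned signature.
import hashlib

def signature(content: bytes) -> str:
    # content fingerprint: any change to the bytes changes the digest
    return hashlib.sha1(content).hexdigest()

def check_signature(content: bytes, expected: str) -> bytes:
    # mirrors the spirit of pin(..., signature = ...): fail loudly on change
    actual = signature(content)
    if actual != expected:
        raise ValueError(f"dataset changed: expected {expected}, got {actual}")
    return content

data = b'{"weather": "Drizzle"}'
pinned = signature(data)        # record this once, like pin_info() reports
check_signature(data, pinned)   # unchanged content passes silently
```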

pins 0.4 allows you to display and retrieve versions from services like GitHub, Kaggle and RStudio Connect. Even in boards that don’t support versioning natively, you can opt-in by registering a board with versions = TRUE.

To keep this simple, let’s focus on GitHub first. We will register a GitHub board and pin a dataset to it. Notice that in GitHub boards you can also use the commit parameter to set the commit message for this change.

board_register_github(repo = "javierluraschi/datasets", branch = "datasets")

pin(iris, name = "versioned", board = "github", commit = "use iris as the main dataset")

Now suppose that a colleague comes along and updates this dataset as well:

pin(mtcars, name = "versioned", board = "github", commit = "slight preference to mtcars")

From now on, your code could be broken or, even worse, produce incorrect results!

However, since GitHub was designed as a version control system and pins 0.4 adds support for pin_versions(), we can now explore particular versions of this dataset:

pin_versions("versioned", board = "github")

# A tibble: 2 x 4
  version created              author         message
1 6e6c320 2020-04-02T21:28:07Z javierluraschi slight preference to mtcars
2 01f8ddf 2020-04-02T21:27:59Z javierluraschi use iris as the main dataset

You can then retrieve the version you are interested in as follows:

pin_get("versioned", version = "01f8ddf", board = "github")

# A tibble: 150 x 5
   Sepal.Length Sepal.Width Petal.Length Petal.Width Species
 1          5.1         3.5          1.4         0.2 setosa
 2          4.9         3            1.4         0.2 setosa
 3          4.7         3.2          1.3         0.2 setosa
 4          4.6         3.1          1.5         0.2 setosa
 5          5           3.6          1.4         0.2 setosa
 6          5.4         3.9          1.7         0.4 setosa
 7          4.6         3.4          1.4         0.3 setosa
 8          5           3.4          1.5         0.2 setosa
 9          4.4         2.9          1.4         0.2 setosa
10          4.9         3.1          1.5         0.1 setosa
# … with 140 more rows

You can follow similar steps for RStudio Connect and Kaggle boards, even for existing pins! Other boards like Amazon S3, Google Cloud, DigitalOcean and Microsoft Azure require you to explicitly enable versioning when registering your boards.

DigitalOcean

To try out the new DigitalOcean Spaces board, first you will have to register this board and enable versioning by setting versions to TRUE:

library(pins)

board_register_dospace(space = "pinstest",
                       key = "AAAAAAAAAAAAAAAAAAAA",
                       secret = "ABCABCABCABCABCABCABCABCABCABCABCABCABCA==",
                       datacenter = "sfo2",
                       versions = TRUE)

You can then use all the functionality pins provides, including versioning:

# create pin and replace content in digitalocean
pin(iris, name = "versioned", board = "pinstest")
pin(mtcars, name = "versioned", board = "pinstest")

# retrieve versions from digitalocean
pin_versions(name = "versioned", board = "pinstest")

# A tibble: 2 x 1
  version
1 c35da04
2 d9034cd

Notice that enabling versions in cloud services requires additional storage space for each version of the dataset being stored.

To learn more, visit the Versioning and DigitalOcean articles; to catch up with previous releases, see the pins NEWS file.

Thanks for reading along!







To leave a comment for the author, please follow the link and comment on their blog: pins.

How to publish a Shiny app: example with shinyapps.io

Fri, 05/29/2020 - 02:00

[This article was first published on R on Stats and R, and kindly contributed to R-bloggers].

Introduction

The COVID-19 pandemic has led many people to create interactive apps and dashboards. A reader recently asked me how to publish a Shiny app she had just created. As in a previous article, where I show how to upload R code on GitHub, I thought it would be useful to show how I publish my Shiny apps so others can do the same.

Before going through the different steps required to deploy your Shiny app online, you can check the final result with my apps here.

Note 1: The screenshots were taken on macOS and I have not tested the process on Windows. Do not hesitate to let me know in the comments whether it is similar on other operating systems.

Note 2: There are other ways to publish your app (with Docker for example), but the method shown below is (in my opinion) easy and works well.

Prerequisite

I personally use the shinyapps.io platform to deploy my Shiny apps. So in order to follow this guide you will first need to create an account (if you do not already have one).

They offer a free plan, but you are limited to 5 active applications and a monthly usage of 25 active hours.

For your information, if you make your app available to a wide audience, expect to exceed the monthly cap of active hours quite quickly. To increase the monthly limit (or to publish more than 5 apps), you will need to upgrade your plan to a paying one.

Step-by-step guide

Below are the steps to follow, in pictures.

Step 1: Open RStudio and create a new Shiny app:

Step 2: Give it a name (without space), choose where to save it and click on the Create button:

Step 3: In the same way as when you open a new R Markdown document, the code for a basic Shiny app is created. Run the app by clicking on the Run App button to see the result:

Step 4: The basic app opens, publish it:

Step 5: If it is your first Shiny app, the box “Publish From Account” should be empty. Click on “Add New Account” to link the shinyapps.io account you just created:

Step 6: Click on the first alternative (ShinyApps.io):

Step 7: Click on the link to your ShinyApps account:

Step 8: Click on the Dashboard button to log into your account:

Step 9: Click on your name and then on Tokens:

Step 10: If this is your first app, there should be no token already created. Create one by clicking on the Add Token button, then click on the Show button:

Step 11: Click on the Show Secret button:

Step 12: Now the code is complete (nothing is hidden anymore). Click on the Copy to clipboard button:

Step 13: Copy the code and click on the OK button:

Step 14: Go back to RStudio, paste the code in the console and run it:

Your computer is now authorized to deploy applications to your shinyapps.io account.
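For reference, the snippet you paste is a call to rsconnect::setAccountInfo() with your account name, token, and secret. The values below are placeholders; your shinyapps.io dashboard provides the real ones:

```r
# Placeholders only: substitute the values copied from your dashboard
rsconnect::setAccountInfo(name   = "<ACCOUNT>",
                          token  = "<TOKEN>",
                          secret = "<SECRET>")
```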

Step 15: Go back to the window where you can publish your app, choose a title (without space) and click on the Publish button:

Step 16: After several seconds (depending on the size of your app), the Shiny app should appear in your internet browser:

Step 17: You can now edit the app (or replace the entire code by another of your app), and run the app again by clicking on the Run App button. For this illustration, I just added a link for more information in the side panel:

Step 18: Check that the modifications have been taken into account (the link appears in the side panel as expected) and republish your app:

Step 19: Click on the Publish button:

Step 20: Your app is live! You can now share it and everyone with the link will be able to use it:

Additional notes

If you need to change the settings of your Shiny app, go to your shinyapps.io dashboard and click on the app you just created to access the settings:

See the different settings in the tabs located at the top of the windows, and see the link to the app next to the URL field:

Thanks for reading. I hope this tutorial helped you to publish your first Shiny app.

As always, if you have a question or a suggestion related to the topic covered in this article, please add it as a comment so other readers can benefit from the discussion.

Get updates every time a new article is published by subscribing to this blog.


To leave a comment for the author, please follow the link and comment on their blog: R on Stats and R.

AdaOpt classification on MNIST handwritten digits (without preprocessing)

Fri, 05/29/2020 - 02:00

[This article was first published on T. Moudiki's Webpage - R, and kindly contributed to R-bloggers].

Last week on this blog, I presented AdaOpt for R, applied to iris dataset classification. And the week before that, I introduced AdaOpt for Python. AdaOpt is a novel probabilistic classifier based on a mix of multivariable optimization and a nearest neighbors algorithm. More details about the algorithm can be found in this (short) paper. This week, we are going to train AdaOpt on the popular MNIST handwritten digits dataset without preprocessing, i.e., with neither convolution nor pooling.

Install mlsauce’s AdaOpt from the command line (for R, cf. below):

!pip install git+https://github.com/thierrymoudiki/mlsauce.git --upgrade

Import the packages that will be necessary for the demo:

from time import time
from tqdm import tqdm
import mlsauce as ms
import numpy as np
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.datasets import fetch_openml

Get MNIST handwritten digits data (notice that here, AdaOpt is trained on 5000 digits, and evaluated on 10000):

Z, t = fetch_openml('mnist_784', version=1, return_X_y=True)
print(Z.shape)
print(t.shape)

t_ = np.asarray(t, dtype=int)

np.random.seed(2395)
train_samples = 5000

X_train, X_test, y_train, y_test = train_test_split(
    Z, t_, train_size=train_samples, test_size=10000)

Creation of an AdaOpt object:

obj = ms.AdaOpt(**{'eta': 0.13913503573317965,
                   'gamma': 0.1764634904063013,
                   'k': np.int(1.2154947405849463),
                   'learning_rate': 0.6161538857826013,
                   'n_iterations': np.int(245.55517115592275),
                   'reg_alpha': 0.29915416038957043,
                   'reg_lambda': 0.163411853029936,
                   'row_sample': 0.9477046112286693,
                   'tolerance': 0.05877163298305207})

Adjusting the AdaOpt object to the training set:

start = time()
obj.fit(X_train, y_train)
print(time() - start)

0.7025153636932373

Obtain the accuracy of AdaOpt on test set:

start = time()
print(obj.score(X_test, y_test))
print(time() - start)

0.9372
9.997464656829834

Classification report including additional error metrics:

preds = obj.predict(X_test)
print(classification_report(preds, y_test))

              precision    recall  f1-score   support

           0       0.99      0.94      0.96      1018
           1       0.99      0.95      0.97      1205
           2       0.93      0.97      0.95       955
           3       0.92      0.91      0.91      1064
           4       0.91      0.95      0.93       882
           5       0.89      0.95      0.92       838
           6       0.97      0.96      0.96       974
           7       0.95      0.95      0.95      1054
           8       0.88      0.93      0.91       953
           9       0.93      0.88      0.91      1057

    accuracy                           0.94     10000
   macro avg       0.94      0.94      0.94     10000
weighted avg       0.94      0.94      0.94     10000
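As a reminder of what the report's columns mean, precision, recall, and F1 reduce to simple ratios of true positives, false positives, and false negatives. A minimal sketch with made-up counts (these are not AdaOpt's numbers):

```python
# Per-class scores behind a classification report, from raw error counts.
# The tp/fp/fn values below are made up for illustration.
def precision(tp, fp):
    # of everything predicted positive, how much was right
    return tp / (tp + fp)

def recall(tp, fn):
    # of everything actually positive, how much was found
    return tp / (tp + fn)

def f1(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)  # harmonic mean of precision and recall

# Example: 90 correct detections, 10 false alarms, 30 misses.
p = precision(90, 10)  # 0.9
r = recall(90, 30)     # 0.75
```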

Confusion matrix, true label vs predicted label:

import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
from sklearn.metrics import confusion_matrix

mat = confusion_matrix(y_test, preds)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False)
plt.xlabel('true label')
plt.ylabel('predicted label');

In R, the syntax is quite similar to what we’ve just demonstrated for Python. After having installed mlsauce, we’d have:

  • For the creation of an AdaOpt object:
library(mlsauce)

# create AdaOpt object with default parameters
obj <- mlsauce::AdaOpt()

# print object attributes
print(obj$get_params())
  • For fitting the AdaOpt object to the training set:
# fit AdaOpt to training set
obj$fit(X_train, y_train)
  • For obtaining the accuracy of AdaOpt on test set:
# obtain accuracy on test set
print(obj$score(X_test, y_test))

Note: I am currently looking for a gig. You can hire me on Malt or send me an email: thierry dot moudiki at pm dot me. I can do descriptive statistics, data preparation, feature engineering, model calibration, training and validation, and model outputs’ interpretation. I am fluent in Python, R, SQL, Microsoft Excel, Visual Basic (among others) and French. My résumé? Here!


To leave a comment for the author, please follow the link and comment on their blog: T. Moudiki's Webpage - R.

RStudio Shortcuts and Tips

Thu, 05/28/2020 - 17:31

[This article was first published on r – Appsilon Data Science | End to End Data Science Solutions, and kindly contributed to R-bloggers].

Updated: May 2020 by Appsilon Data Science

How to Work Faster in RStudio

In this article we have compiled many of our favorite RStudio keyboard shortcuts, tips, and tricks to help increase your productivity while working with the RStudio IDE. We’ll also provide information about supplemental tools and techniques that are useful for data scientists that work with R.

Here’s what we cover:

*Note: Although we present both options in the gifs (PC and Mac shortcuts), we refer to PC shortcuts in the text. If you are a Mac user, most of the shortcuts follow this mapping:

Ctrl == ⌘ Command  &&  Alt == ⌥ Option

Keep in mind that in some cases Ctrl will also be the Control key on a Mac, which can be confusing. You can always look up the proper shortcuts on RStudio's website, or within RStudio itself with:

Option+Shift+K (Alt+Shift+K)

How to Navigate RStudio

Depending on your work, you will use at least a few RStudio panes on a regular basis. Learning how to quickly change focus between the ones you use most, without reaching for your pointing device, is a crucial skill for speeding up your workflow. You do this by pressing Ctrl (in this case also Control on Mac) and the number corresponding to the desired pane. By adding Shift to the combination, you can also maximize the pane you are switching to, which is very handy if you need a broader perspective. The only pane with a different access shortcut is the terminal (Shift+Alt+T). Preset panes include Help (3), History (4), Plots (5), and Environment (8). The two you will jump between most frequently are the Source Editor (1) and the Console (2). Let's now discuss how you can improve how you work in those.

How to Use Shortcuts in RStudio

Usually, the first thing you have to do when you start working is to write some code. It is crucial to be aware that there are features that can make this both easier and faster. Even basic tricks can have a great impact once you master them, especially when combined.

Code Completion

A suggestion list will pop up as you type, or can be brought up manually by pressing Tab or Ctrl + Space. You can adjust these settings in Global Options -> Code -> Completion. To accept the suggested phrase, press either Tab or Enter; pressing Ctrl + Space while the auto-completion list is open will close it. You can navigate through the suggestion list with the arrow keys, or just hover over an item before accepting it.

If the list is too long, try providing more letters to narrow it down. Besides auto-completing functions and variables, you can also insert snippets; we will come back to those later. It's good to be aware that auto-completion in R, as well as some search fields, supports fuzzy matching, which means you don't have to type all the letters: you can skip any of them, as long as the ones you type are in order and identify what you are looking for. This is especially useful for long function names that you use often, and mastering it will allow you to type code much faster. Note that for fuzzy matching to work with auto-completion, the suggestion popup must already be active. In case it doesn't behave as you would expect, try tweaking the code completion options.
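The matching rule described above, where typed letters may skip characters as long as they appear in order, is a subsequence test. RStudio's actual scoring is more elaborate, but the core idea can be sketched in a few lines of Python:

```python
# Minimal sketch of fuzzy (subsequence) matching: every typed character
# must appear in the candidate, in the same order, with gaps allowed.
# This illustrates the idea only; RStudio's real matcher also ranks results.
def fuzzy_match(typed: str, candidate: str) -> bool:
    it = iter(candidate)
    # "ch in it" advances the iterator, so order is enforced automatically
    return all(ch in it for ch in typed)

# Short abbreviations still find long function names:
candidates = ["board_register_github", "pin_versions", "pin_get"]
matches = [c for c in candidates if fuzzy_match("pnvrs", c)]
```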

Paths

If you need to type a path, you can use file path auto-completion, which can be brought up by pressing the auto-completion shortcut (Tab or Ctrl + Space) inside a pair of double or single quotes.

By default it starts in your working directory. You can navigate from the root location, as in a shell console, by starting with “/”, or step up levels in the directory tree by stacking “../”.

How to Execute and Format Code in RStudio

Executing code in your scripts can be very easy with the following shortcuts:

  • Ctrl + Enter – runs the current line and jumps to the next one, or runs the selected part without jumping further.
  • Alt + Enter – runs code without moving the cursor to the next line, handy if you want to run one line of code multiple times without selecting it.
  • Ctrl + Alt + R – runs the whole script.
  • Ctrl + Alt + B/E – runs the script from the Beginning to the current line, or from the current line to the End.

If you want to make your code look better quickly try using the following:

  • Ctrl + I to fix line indentation
  • Ctrl + Shift + A for a complete reformat of the selected part of the code

If you are not happy with the outcome, you can always undo the changes. If you are looking for a more flexible styling solution, check out the styler package.

You may also benefit from remembering these super helpful shortcuts:

Moving lines of code up and down is easily achieved with the Alt + Up/Down combination; there is no need to cut and paste. You can move a single active line that way, or a whole selection. If you need to remove something, Ctrl + D deletes the current line or selection in no time.

Console History & History Pane

Everything that you passed to the console doesn’t have to be typed again. Previously executed lines can be accessed with the Up and Down arrows, which cycle through them in chronological order. For more visual feedback, press Ctrl + Up to get a list of recent commands. If you combine it with typing part of the phrase you are searching for, you can narrow the list down and easily find even complicated commands buried deep in the history. This will also override the auto-completion popup if it is active. Note: searching the console history doesn’t support fuzzy matching, so you have to be exact. To clear your console use Ctrl + L; the command history will be preserved.

There is also a History pane which stores executed commands. It allows searching and easy selection of the ones you need (pick a range with Shift, or gather individual entries with Ctrl), and then inserting them back into the console (Enter) or the source file (Shift + Enter). The latter saves you from copying multiple commands from the console to the source manually, which is troublesome because the prompt signs “>” get copied as well and would otherwise have to be removed.

Dealing with Tabs

If you find yourself working with more than one tab in the source editor, you might find it helpful to switch between them with the Ctrl + Tab and Ctrl + Shift + Tab combinations, which jump to the next and previous tab respectively; Ctrl + F11/F12 does the same if it suits you better. It is also possible to jump to the first or last tab by adding Shift to those. A last option that is quite interesting is navigating through tabs in the order they were accessed with Ctrl + F9/F10.

Navigate tabs history back and forward:

Jumping tabs:

Going through tabs back and forth:

Closing the current tab is easy with Ctrl + W. It is a much better choice than using the small “x” buttons on the right side of your tabs. If you get to the point where you have a huge number of tabs open, you can:

Close All  | Ctrl + Shift + w (+ Alt to keep the currently open one):

Or, if you prefer to keep many tabs open, you can search through them with Ctrl + Shift + . Be exact: no fuzzy matching here. This search can also be activated with the “>>” icon on the tabs bar.

The above shortcuts are also accessible from the File dropdown menu, which can come in handy while using an RStudio browser session, or simply if you forget them.

Code Inserting Shortcuts in RStudio

Operators and Sections

Let’s start with some shortcuts that are easy and very useful! If you want to speed up typing the most common operators you will definitely love these:

Alt + (-) for inserting assignment operator <-

and

Ctrl + Shift + M for a magrittr operator (aka pipe)  %>%

The nice thing about these two is that spaces are inserted along with the operator.

Ctrl + Shift + R  is an easy way to create foldable comment sections in your code.

It’s also worth knowing that these sections can be used for code externalization with the knitr::read_chunk() function. If you want to know more about that, check the details.

You can open/collapse those comment sections (as well as other kinds of sections e.g. inside curly braces {} or in Rmd) with

Alt + L – collapse

Alt + Shift + L – open

To collapse or open all sections instead of just the active one, replace L with O in those shortcuts.
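For example, a short script organized with such sections might look like this (a minimal sketch; the section names and objects are mine, not from the original post). Each comment ending in four or more dashes becomes a foldable section:

```r
# Load packages -----------------------------------------------------------
library(stats)

# Prepare data ------------------------------------------------------------
clean <- na.omit(c(1, NA, 3))  # drop the missing value

# Summarise ---------------------------------------------------------------
avg <- mean(clean)
```

Placing the cursor inside any section and pressing Alt + L folds just that block, which keeps long analysis scripts navigable.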

Function/Variable Extraction

If you have written a statement that you would like to convert into a function, don’t start from scratch. Select it and try Ctrl + Alt + X, the shortcut for “extract into function”. You only need to provide the function name; all necessary inputs are filled in automatically. There is also a similar shortcut for variable extraction, Ctrl + Alt + V. Here you have an example of usage.
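To illustrate what extraction produces (a sketch with my own variable and function names): selecting the statement below and pressing Ctrl + Alt + X wraps it in a function whose arguments are the free variables RStudio detects in the selection.

```r
# Before extraction: a standalone statement using x and center
x <- c(2, 4, 6)
center <- 3
result <- sum((x - center)^2) / length(x)

# After "extract into function" with the name mean_sq_dev:
mean_sq_dev <- function(x, center) {
  sum((x - center)^2) / length(x)
}
result2 <- mean_sq_dev(x, center)
```

Both forms compute the same value; the extracted version is simply reusable.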

Renaming in Scope

If you have to change a variable name in multiple places but are afraid that ‘find and replace’ will mess up your code, be aware that it is possible to rename within scope only. Select the function or variable you want to change and press Ctrl + Shift + Alt + M.

It will select all occurrences in scope; you just have to type the new name.

Yes, the shortcut is long, but it can be helpful. I find it easier to remember as an extension of the magrittr operator shortcut: Pipe + Alt.

Using Code Snippets in RStudio

Are you tired of writing the same chunks of code over and over, and of having to remember all the brackets and required parameters of functions? A great way to avoid so much typing, especially for common code, is to use code snippets.

What are code snippets?

Code snippets are pieces of re-usable boilerplate code.

Snippets are perfect for automatically inserting boilerplate code and avoiding the duplication of simple tasks. If you are looking for a way to speed up writing large parts of code when time is limited (e.g. live coding during a presentation), code snippets can be very useful.

How do I use code snippets?

Snippets can be recognized on your auto-completion list by a {snippet} tag.

Write the snippet name and press Shift + Tab (or Tab twice) to use it. If your input is needed to complete it, just fill in the placeholder positions; you can cycle through them with Tab.

Some of the snippets which are available by default include:

  • Declarations – lib, req, fun, ret, mat
  • Loops – for, while, switch
  • Conditionals – if, el, and ei
  • Apply family functions – apply, lapply, sapply, etc.
  • S4 classes/methods definitions – sc, sm, and sg.
  • Shiny App template – shinyapp

And that’s just for R! There are also snippets for other languages and it is very easy to customize and define your own!

You might have noticed that in the first gif showing the operator shortcuts I used insertOperatorsExample, a very simple custom snippet I created.

How to Create Custom Code Snippets in RStudio

For customizing or creating your own snippets use Edit Snippets button under Snippets section in

Tools -> Global Options ->  Code

To better understand how you can create your own snippets, let’s look at the mat and fun snippet declarations as an example.

snippet mat
	matrix(${1:data}, nrow = ${2:rows}, ncol = ${3:cols})

snippet fun
	${1:name} <- function(${2:variables}) {
		${0}
	}

$ is used as a special character to denote where the cursor should jump after completing each section of a snippet. Inside the braces we have a field index (the order in which the cursor will jump when pressing Tab; 0 marks the last field), and the text after the colon describes what should be placed in that spot. To insert a literal “$” inside a snippet, it must be escaped as \$.

Besides generating code templates, snippets can also run R code, which allows you to create dynamic snippets. Any `r expr` placed in your snippet is executed when the snippet is expanded, and the result is inserted into the document.

As an example, take a look at the timestamp snippet declaration that is available by default:

snippet ts `r paste("#", date(), "------------------------------\n")`

It runs a paste function to insert a comment with a current date into code. Its execution resolves into something like this:

Equipped with this knowledge, let’s quickly create a custom snippet that inserts the pipe, but with a newline instead of a space after it:

snippet pipe `r paste(" %>%\n")`

Using Search in RStudio

So, if you don’t have a lot of code yet, you have the tools to quickly generate it.

The next question is then, how to find things that you are looking for quickly.

There are several available options for search that you can use.

The go-to file/function search, Ctrl + ., allows you to quickly search your project for a file or function and jump directly to it. It supports fuzzy matching, so it’s easy to find what you need.

If you need more power, use Ctrl + Shift + F to open the Find in Files window, which lets you search through files in a directory you specify (even outside the project). You can jump between the matches by double-clicking them in the Find in Files pane that opens next to the console.

If you want to search only inside the active source tab, use the find bar (Ctrl + F), which offers several additional options such as replacing text and searching only within a selected part of the code. It can also be useful for multiple-cursor editing; see the section below.

We have already covered more methods in part 1 – search within console history and searching through your tabs. You can refer to it if you want to get more details on those.

How to Edit With Multiple Cursors in RStudio

In RStudio, it is possible to write and edit in more than one place at a time. There are a couple of ways to create multiple cursors. You can press Ctrl + Alt + (Up/Down) to create a new cursor in the direction in which you press. If you want to quickly select more lines use Alt and drag with the mouse to create a rectangular selection, or Alt + Shift and click to create a rectangular selection from the current cursor position to the clicked position. 

This way of editing may look intimidating at first and may not be easy to operate initially. However, knowing it is there can save you time when you encounter repetitive multi-line tasks. Try playing around with multiple cursors and see how it feels.

Below you can see an example of how using multiple cursors might look:

Another way to place multiple cursors is to use the Find/Replace toolbar from the previous paragraph. Just search for a phrase and press the All button to select all matching items; it creates a cursor at each match. If you don’t want to search the entire file, you can limit the searched area by selecting the part you are interested in and checking the box with the “In selection” option.

How to Use R Addins

R Addins are a broad topic that could fill a blog post on their own. We just want to give you a brief introduction to this concept.

What are R Addins?

R Addins make it possible to execute R functions interactively, right from within RStudio. Addins are distributed as R packages and can be launched either through the Addins dropdown on the toolbar or through assigned keyboard shortcuts.

We can distinguish two types of addins: text macros and Shiny gadgets. Text macros insert text into the console or source pane, or transform text within the source pane. Shiny gadgets are interactive Shiny applications launched inside RStudio; they may also perform transformations like text macros, but their possibilities are much more extensive.

Test Out Some Addins

To quickly try some addins, you can install the examples from the RStudio GitHub:

devtools::install_github("rstudio/addinexamples", type = "source")

It will give you a text macro for inserting the %in% operator as well as three Shiny gadgets, for a small sneak peek of what’s possible.

As mentioned, you can assign a keyboard shortcut to an addin the same way as you do with regular shortcuts. You can find addins easily by filtering for “Addin” (all of them have their scope set that way).

Make Your Own R Addins

If you want to check out more of them try the addinslist package by Dean Attali.

If you would like to create your own addins, you can find more information on how to do it here.

Bonus RStudio Tips

Tip: Use Vim Settings

Keep your hands in one place! Vim keybindings are a powerful method for programmers. Examples: dd deletes the whole line, 7dd deletes 7 lines, and you can navigate, record macros, and jump around whole words instead of letters.

Tip: Use .RProfile

When you develop an R package, it’s useful to load frequently used development packages in the .Rprofile file (placed in the main package directory). For example:

library(devtools)
library(testthat)

This way you can use functions like test() and check() without a package prefix and without loading the packages yourself.
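A slightly more defensive variant (my own suggestion, not from the original post) loads each package only if it is installed, so the project still opens cleanly on machines without the dev tooling. The helper name quietly_load is mine:

```r
# .Rprofile (project root): attach dev packages only when available
quietly_load <- function(pkg) {
  if (requireNamespace(pkg, quietly = TRUE)) {
    library(pkg, character.only = TRUE)
  } else {
    message("Dev package not installed, skipping: ", pkg)
  }
}

quietly_load("devtools")
quietly_load("testthat")
```

Remember that .Rprofile is read only at startup, so restart the R session after editing it.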

Tip: Increase Security with .Renviron

Do not keep credentials inside your project code. A good practice is to keep them “gitignored” inside the .Renviron file:

db_password=MySecretPassword

Then use the variable in your code with Sys.getenv("db_password").
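A minimal sketch of the pattern (the variable name db_password is the article’s example; the wrapper function is mine). Sys.getenv() returns "" for an undefined variable, so it pays to fail loudly rather than connect with an empty password:

```r
# Read a credential set in .Renviron; Sys.getenv() returns "" when
# the variable is not defined, which we treat as a hard error.
get_db_password <- function() {
  pw <- Sys.getenv("db_password")
  if (identical(pw, "")) {
    stop("db_password is not set; add it to your .Renviron file")
  }
  pw
}
```

As with .Rprofile, .Renviron is read at startup, so restart R after changing it.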

Tip: Use Docker

If you want to keep a consistent environment for your project development within a team, use a dockerized version of RStudio (https://hub.docker.com/r/rocker/rstudio/).

Tell Us About Your RStudio Tips

There is obviously plenty more to explore on the topic of improving your RStudio workflow, and we hope you are inspired to pursue further exploration and experiments on your own. If you end up with something useful as a result, be it a code snippet, an addin, or just something handy that we did not mention here, why not share it as a comment below? We’ll be updating this page regularly with more RStudio tips.

Further Reading

If you’re looking for more R tutorials, try these out:

Article RStudio Shortcuts and Tips comes from Appsilon Data Science | End­ to­ End Data Science Solutions.


To leave a comment for the author, please follow the link and comment on their blog: r – Appsilon Data Science | End­ to­ End Data Science Solutions. R-bloggers.com offers daily e-mail updates about R news and tutorials about learning R and many other topics. Click here if you're looking to post or find an R/data-science job. Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.

How to Safely Remove a Dynamic Shiny Module

Thu, 05/28/2020 - 13:43

[This article was first published on r – Appsilon Data Science | End to End Data Science Solutions, and kindly contributed to R-bloggers].

Despite their advantages, dynamic Shiny Modules can destabilize the Shiny environment and cause its reactive graph to be rendered multiple times. In this blog post, I present how to remove deleted-module leftovers and make sure that your observers are registered only once.

While working with advanced Shiny applications, you have most likely encountered the need for Shiny Modules. Shiny Modules allow you to modularize your code, reuse it to create multiple components from single functions, and prevent code duplication.

Perhaps the best feature of Shiny Modules is the ability to create dynamic app elements. A great implementation of this can be found here. This particular example provides convenient logic for adding and removing variables and their values in a reactive manner.

Implementing Shiny Modules does come with certain challenges that can affect the stability of your Shiny environment. In this article, I will show you how to overcome them.

Removing the remnants of an obsolete module

Removing a module can have a destabilizing impact on your Shiny app environment. To illustrate this problem, let’s consider this simple application:

The app allows the user to create (and remove) a new module that counts the number of clicks of the button placed inside of the module. The number of clicks is also displayed outside the module in order to see the internal module value after that module is removed.
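A minimal sketch of such a click-counting module (my own reconstruction for illustration, not the article’s exact code; the input id local_counter matches the ids used later in this post, but the rest is assumed):

```r
# UI part: a namespaced button plus a display of the click count
counterUI <- function(id) {
  ns <- shiny::NS(id)
  shiny::tagList(
    shiny::actionButton(ns("local_counter"), "Click me"),
    shiny::textOutput(ns("clicks"))
  )
}

# Server part: show the count inside the module and report it to the
# caller through a reactiveVal passed in as report_clicks
counterServer <- function(input, output, session, report_clicks) {
  output$clicks <- shiny::renderText({
    paste("Clicks:", input$local_counter)
  })
  shiny::observeEvent(input$local_counter, {
    report_clicks(input$local_counter)
  })
}
```

In the main server function this would be wired up with something like callModule(counterServer, "my_module", report_clicks = local_clicks).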

The expectation is that removing the module would remove its internal objects including input values. Unfortunately, this is not the case:

In fact, removing the module only affects the UI; the module’s reactive values remain in the Shiny session environment and are simply overwritten when a new module is created. This becomes particularly problematic when the module stores large inputs: each new module adds to the memory used by the application and can quickly exhaust the available RAM on the server hosting it. This issue can be resolved by introducing the remove_shiny_inputs function, as explained here, which removes the input values of an unused Shiny module.

In our implementation, making use of the function requires a simple modification of the remove_module event:

observeEvent(input$remove_module, {
  removeUI(selector = "#module_content")
  shinyjs::disable("remove_module")
  shinyjs::enable("add_module")
  remove_shiny_inputs("my_module", input)
  local_clicks(input[["my_module-local_counter"]])
})
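For reference, the helper itself can be sketched roughly like this (an assumed implementation based on the linked write-up; the exact code there may differ). It drops every input whose name carries the module’s namespace prefix by reaching into Shiny’s internal input storage:

```r
remove_shiny_inputs <- function(id, .input) {
  # Module inputs are namespaced as "<id>-<name>"
  module_inputs <- grep(paste0("^", id, "-"), names(.input), value = TRUE)
  invisible(lapply(module_inputs, function(i) {
    # .subset2() bypasses the reactive read and exposes the internal
    # ReactiveValues object, whose remove() deletes the stored entry
    .subset2(.input, "impl")$.values$remove(i)
  }))
}
```

Note that this relies on Shiny internals, so it may break with future Shiny versions.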

Removing internal observers that have registered multiple times

The second issue has likely caused many Shiny programmers to tear their hair out.

Observe events, just like the reactive values in the example above, are not removed when a module is deleted. In this case the issue is even more serious: the obsolete observer is replicated rather than overwritten. As a result, the observer is triggered as many times as a new module (with the same id) has been created.

In our example, adding a simple print function inside an observeEvent shows the essence of this issue:

observeEvent(input$local_counter, {
  print(paste("Clicked", input$local_counter))
  local_clicks(input$local_counter)
}, ignoreNULL = FALSE, ignoreInit = TRUE)

This behavior may cause your application to slow down significantly within just a few minutes of use.

The fastest solution to this problem is a workaround in which the developer creates each new module with a unique identifier. This way, each new module creates a unique observer, and the previous observers are never triggered again.

The proper solution, which is not as commonly known, is offered directly by the Shiny package and does not require any hacky workarounds. We begin by assigning the observer to a variable:

my_observer <- observeEvent(...)

The my_observer object gives us access to several helpful methods related to the created observer. One of them, destroy(), correctly removes a Shiny observer from its environment:

my_observer$destroy()

We have two options to apply this solution to our example.

The first approach calls for assigning the module’s observer to a variable that is accessible directly from within the Shiny server. For instance, we can use a reactiveVal that is passed to the module and designed to store observers.

The second approach makes use of the session$userData object (see Marcin’s related blog post).

We decided to use the second approach, so we assigned the observer to the `session$userData$clicks_observer` variable:

session$userData$clicks_observer <- observeEvent(...)

Then, we modified the remove_module event by adding a destroy action on the variable we just created:

observeEvent(input$remove_module, {
  removeUI(selector = "#module_content")
  shinyjs::disable("remove_module")
  shinyjs::enable("add_module")
  remove_shiny_inputs("my_module", input)
  local_clicks(input[["my_module-local_counter"]])
  session$userData$clicks_observer$destroy()
})

The result met our expectations:

The final application code is available here.

Conclusion

Shiny offers great functionalities for creating advanced, interactive applications. 

Whilst this powerful package is easy to use, we still need to properly manage the application’s low-level objects to ensure optimal performance. If you have struggled with other, less common Shiny challenges, please share your examples in the comment section below!

Follow Us for More

Article How to Safely Remove a Dynamic Shiny Module comes from Appsilon Data Science | End­ to­ End Data Science Solutions.


To leave a comment for the author, please follow the link and comment on their blog: r – Appsilon Data Science | End to End Data Science Solutions.

Version 0.9.1 of NIMBLE released

Thu, 05/28/2020 - 03:31

[This article was first published on R – NIMBLE, and kindly contributed to R-bloggers].

We’ve released the newest version of NIMBLE on CRAN and on our website. NIMBLE is a system for building and sharing analysis methods for statistical models, especially for hierarchical models and computationally-intensive methods (such as MCMC and SMC). Version 0.9.1 is primarily a bug fix release but also provides some minor improvements in functionality.

Users of NIMBLE in R 4.0 on Windows MUST upgrade to this release for NIMBLE to work.

New features and bug fixes include:

  • switched to use of system2() from system() to avoid an issue on Windows in R 4.0;
  • modified various adaptive MCMC samplers so the exponent controlling the scale decay of the adaptation can be adjusted by the user;
  • allowed pmin() and pmax() to be used in models;
  • improved handling of NA values in the dCRP distribution; and
  • improved handling of cases where indexing goes beyond the extent of a variable in expandNodeNames() and related queries of model structure.

Please see the release notes on our website for more details.


To leave a comment for the author, please follow the link and comment on their blog: R – NIMBLE.

April 2020: “Top 40” New CRAN Packages

Thu, 05/28/2020 - 02:00

[This article was first published on R Views, and kindly contributed to R-bloggers].

One hundred forty-eight new packages made it to CRAN in April. Here are my “Top 40” picks in nine categories: Computational Methods, Data, Machine Learning, Medicine, Science, Statistics, Time Series, Utilities, and Visualization.

Computational Methods

JuliaConnectoR v0.6.0: Allows users to import Julia packages and functions in such a way that they can be called directly as R functions.

RcppBigIntAlgos v0.2.2: Implements the multiple polynomial quadratic sieve (MPQS) algorithm for factoring large integers and a vectorized factoring function that returns the complete factorization of an integer. See Pomerance (1984) and Silverman (1987) for background and this Microsoft post for an explanation.

smoothedLasso v1.0: Implements the smoothed LASSO regression using the method of Nesterov (2005).

Data

daqape v0.3.0: Provides a variety of methods to identify data quality issues in process-oriented data. There is an Introduction.

DSOpal v1.1.0: is the DataShield implementation of Opal, the data integration application for biobanks by OBiBa, open source software for epidemiology.

epuR v0.1: Provides functions to collect data from the economic policy uncertainty website. See the vignette.

hystReet v0.0.1: Implements an API wrapper for the Hystreet project which provides pedestrian counts for various cities in Germany. See the vignette to get started.

rGEDI v0.1.7: Provides a set of tools for downloading, reading, visualizing and processing GEDI Level1B, Level2A and Level2B data. See the vignette to get started.

Machine Learning

catsim v0.2.1: Computes structural similarity metrics for binary and categorical 2D and 3D images including Cohen’s kappa, Rand index, adjusted Rand index, Jaccard index, Dice index, normalized mutual information, or adjusted mutual information. See Thompson & Maitra (2020) for background and the vignette for an introduction.

klic v1.0.2: Implements a kernel learning integrative clustering algorithm which allows combining multiple kernels, each representing a different measure of the similarity between a set of observations. There is an Introduction.

MIDASwrappeR v0.5.1: Provides a wrapper for the C++ implementation of the MIDAS algorithm described in Bhatia et al. (2020) for graph-like data. See the Introduction.

VUROCS v1.0: Calculates the volume under the ROC surface and its (co)variance for ordered multi-class ROC analysis as well as certain bivariate ordinal measures of association.

WeightSVM v1.7-4: Provides functions for subject/instance weighted support vector machines (SVM). It uses a modified version of libsvm and is compatible with e1071 package. Look here for some background.

Medicine

covid19.analytics v1.1: Provides functions to load and analyze COVID-19 data from the Johns Hopkins University CSSE data repository. It includes functions to visualize cases for specific geographical locations, generate interactive visualizations and produce a SIR model. See the vignette for an introduction.

covid19france Provides functions to import, clean and update French COVID-19 data from opencovid19-fr.

interactionR v0.1.1: Produces a publication-ready table that includes all effect estimates necessary for full reporting effect modification and interaction analysis as recommended by Knol & Vanderweele (2012), estimates confidence interval additive interaction measures using the delta method Hosmer & Lemeshow (1992), the variance recovery method Zou (2008), or percentile bootstrapping Assmann et al. (1996).

RCT v1.0.2: Provides tools to facilitate the process of designing and evaluating randomized control trials, including methods to handle misfits, power calculations, balance regressions, and more. For background see Athey et al. (2017). The vignette describes how to use the package.

Science

rasterdiv: Provides functions to calculate indices of diversity on numerical matrices based on information theory. The rationale behind the package is described in Rocchini et al. (2017). See the vignette for an extended example.

SSHAARP v1.0.0: Processes amino acid alignments from the IPD-IMGT/HLA database to identify user-defined amino acid residue motifs shared across HLA alleles, calculate the frequencies of those motifs, and generate global frequency heat maps that illustrate the distribution of each user-defined map around the globe. See the vignette for an introduction.

Statistics

BayesSampling v1.0.0: Provides functions for applying the Bayes Linear approach to finite populations with the simple random sampling, stratified simple random sampling designs, and to the ratio estimator. See Gonçalves et al. (2014) for background and the vignettes: BLE_Ratio, BLE_Reg, BLE_SRS, BLE_SSRS, and BayesSampling.

cort v0.3.1: Provides S4 classes and methods to fit several copula models including empirical checkerboard copula Cuberos et. al (2019) and the Copula Recursive Tree algorithm proposed by Laverny et. al (2020). There are vignettes on the Empirical Checkerboard Copula, the Copula Recursive Tree, the Empirical Checkerboard Copula with known margins, and the convex mixture of m-randomized checkerboards.

ExpertChoice v0.2.0: Implements tools for designing efficient discrete choice experiments. See Street et al. (2005) for some background. There is a Practical Introduction and a vignette with some theory.

genscore v1.0.2: Implements the generalized score matching estimator from Yu et al. (2019) for non-negative graphical models with truncated distributions, and the estimator of Lin et al. (2016) for untruncated Gaussian graphical models. See the vignette.

hmma v1.0.0: Provides functions to fit Bayesian asymmetric hidden Markov models (HMM-As), which are similar to regular HMMs. See Bueno et al. (2017) for background and the vignette for an introduction.

lmeInfo v0.1.1: Provides analytic derivatives and information matrices for fitted linear mixed effects models and generalized least squares models estimated using lme() and gls() as well as functions for estimating the sampling variance-covariance of variance component parameters and standardized mean difference effect sizes. See Pustejovsky et al. (2014) and the vignette.

metapower v0.1.0: Implements a tool for computing meta-analytic statistical power for main effects, tests of homogeneity, and categorical moderator models. Have a look at Pigott (2012), Hedges & Pigott (2004), or Borenstein et al. (2009) for background and the vignette to get started.

sasLM v0.1.3: Implements the SAS procedures for linear models: GLM, REG, ANOVA. The sasLM functions produce the same results as the corresponding SAS procedures for nested and complex models.

sdglinkage 0.1.0: Provides a tool for synthetic data generation that can be used for linkage method development. There is an Overview and vignettes on Real and Synthetic Identifiers, Gold Standard File and Linkage Files, Synthetic Data Generation and Evaluation.

starm v0.1.0: Estimates the coefficients of the two-time centered autologistic regression model described in Gegout-Petit et al. (2019). The vignette describes the theory.

Time Series

ConsReg v0.1.0: Provides functions to fit regression and generalized linear models with autoregressive moving-average (ARMA) errors for time series data. There is a vignette.

simITS v0.1.1: Implements the method of Miratrix (2020) to create prediction intervals for post-policy outcomes in interrupted time series. It provides methods to fit ITS models with lagged outcomes and variables to account for temporal dependencies and then to simulate a set of plausible counterfactual post-policy series to compare to the observed post-policy series. See the vignette.

Utilities

dreamerr v1.1.0: Implements tools to facilitate package development by providing a flexible way to check the arguments passed to functions. See the vignette for details.

flair v0.0.2: Facilitates formatting and highlighting of R source code in a R Markdown based presentation. The vignette shows how.

J4R v1.0.7: Makes it possible to create Java objects and to execute Java methods from the R environment. The JVM is handled by a gateway server which relies on the Java library j4r.jar.

waldo v0.1.0: Provides functions to compare complex R objects and reveal the key differences. It was designed primarily for use in testing packages.

Visualization

anglr v0.6.0: Extends rgl conversion and visualization functions to mesh3d to give direct access to generic 3D tools and provide a full suite of mesh-creation and 3D plotting functions. See the vignette.

brickr v0.3.4: Uses tidyverse functions to generate digital LEGO models and convert image files into 2D and 3D LEGO mosaics. There are vignettes for building mosaics and for generating models from mosaics, programs, tables, and by piece type.

survCurve v1.0: Provides functions to enhance plots created with the survival and mstate packages. See the vignette for examples.

textplot v0.1.2: Provides functions to visualize complex relations in texts by displaying text co-occurrence networks, text correlation networks, dependency relationships and text clustering. The vignette provides examples.


To leave a comment for the author, please follow the link and comment on their blog: R Views. R-bloggers.com offers daily e-mail updates about R news and tutorials about learning R and many other topics. Click here if you're looking to post or find an R/data-science job. Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.

Superior svg graphics rendering in R, and why it matters

Thu, 05/28/2020 - 02:00

[This article was first published on rOpenSci - open tools for open science, and kindly contributed to R-bloggers]. (You can report an issue about the content on this page here) Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.

This week we released a major new version of the rsvg package on CRAN. This package provides R bindings to librsvg2, a powerful system library for rendering svg images into bitmaps that can be displayed, or used for further processing, for example in the magick package.

The biggest change in this release is that the R package on Windows and macOS now includes the latest librsvg 2.48.4. This is a major upgrade: the librsvg2 rendering engine has been completely rewritten in Rust 1 using components from Mozilla Servo. This has resulted in major improvements in quality and performance, and we have gained full support for CSS styling.

In this post we showcase how it works, and why you should use svg for R graphics.

What is rendering

A figure in svg format is stored as xml data containing a vector representation of a drawing, such as a sequence of lines, shapes, text, with their relative position, size, color, attributes, etc. The benefit of svg is that it can be resized without loss of quality. And because it is just xml, the shapes and text can be manipulated using standard xml/css tools, such as a browser or the xml2 package.
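Because an svg figure is just xml, it really can be manipulated with standard xml tools. A minimal sketch with the xml2 package (the tiny svg document and the edit are made up for illustration):

```r
library(xml2)

# A tiny svg document with one circle
doc <- read_xml('<svg xmlns="http://www.w3.org/2000/svg"><circle r="10"/></svg>')

# Find the circle (local-name() sidesteps the svg default namespace)
# and double its radius
circle <- xml_find_first(doc, ".//*[local-name()='circle']")
xml_set_attr(circle, "r", "20")

xml_attr(circle, "r")  # "20"
```

The same approach works on svg files produced by R graphics devices, since they are ordinary xml documents.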

For an image to be displayed on screen, printed in a document, or loaded in editing software, it has to be rendered into a bitmap. A bitmap is a fixed array of w × h pixels with color values. Bitmap formats such as png, jpeg, or tiff all store the same pixel data, using different compression methods.
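To get a feel for the sizes involved: an uncompressed RGBA bitmap stores 4 bytes (red, green, blue, alpha) per pixel, so its raw size grows with the pixel dimensions. A hypothetical back-of-the-envelope helper (not part of the rsvg API):

```r
# Raw size of an uncompressed RGBA bitmap: 4 channels x width x height bytes
bitmap_bytes <- function(width, height, channels = 4) {
  channels * width * height
}

# An 800x600 rendering holds about 1.9 MB of pixel data before compression
bitmap_bytes(800, 600)  # 1920000
```

This is why bitmap formats compress the pixel data, and why a vector svg source is often far smaller than any high-resolution rendering of it.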

The rsvg package renders svg into a bitmap image with the format and size of your choice, directly in R, and without loss of quality:

# Example SVG image
svgdata <- '<svg viewBox="0 0 1200 250" xmlns="http://www.w3.org/2000/svg">
  <style>
    circle { fill: gold; stroke: maroon; stroke-width: 12px; }
    text { fill: navy; font-size: 2em; font-family: "Times, Serif" }
  </style>
  <circle cx="200" cy="125" r="120" />
  <text x="140" y="40" transform="rotate(30 10,20)">I love SVG!</text>
</svg>'

# Render with rsvg into png
writeLines(svgdata, 'image.svg')
rsvg::rsvg_png('image.svg', 'image.png', width = 800)

Instead of rendering to a png/jpeg file, you can also render the svg into raw bitmap data (called raw vectors in R), which you can read with, for example, magick or any other imaging tool:

# Or: convert into raw bitmap data
bitmap <- rsvg::rsvg_raw('image.svg', width = 600)
str(bitmap)
##>  raw [1:4, 1:600, 1:600]

# Read the bitmap in magick
image <- magick::image_read(bitmap)

(Rendered output: a gold circle with a maroon border and the navy text "I love SVG!".)

In magick, you can easily do all sorts of post-processing and conversion of the bitmap image. The magick package has a convenient wrapper function image_read_svg() that does exactly this: it uses rsvg to render the image and then reads the bitmap data as a magick image.

Using SVG for R graphics

The best way to create svg files from graphics in R is using the svglite package. Try running the code below and then have a look at mtcars.svg in a text editor.

library(svglite)
library(ggplot2)

# SVG sizes are in inches, not pixels
res <- 144
svglite("mtcars.svg", width = 1080/res, height = 720/res)
ggplot(mtcars, aes(mpg, disp, colour = hp)) +
  geom_point() +
  geom_smooth()
dev.off()


var anim = new Vivus('my-svg', { duration: 200 });

Again we can use rsvg directly or via magick to convert this to a bitmap image:

# Render the svg into a png image with rsvg via magick
img <- magick::image_read_svg("mtcars.svg", width = 1080)
magick::image_write(img, 'mtcars.png')

This generates a png image of 1080×720 px, without loss of quality.

Using CSS for R graphics?

One feature in librsvg that has improved a lot from servo is support for CSS. As can be seen in the example above, svg allows for specifying global styling via CSS rules. In the browser, CSS and JavaScript can also be used to add interactivity and animation to SVG.

With the latest version of librsvg it is now also possible to specify the CSS stylesheet from an external file, rather than inlining it in the svg itself. For example you can have a fig.svg file like this:

<svg viewBox="0 0 1200 250" xmlns="http://www.w3.org/2000/svg">
  <circle cx="200" cy="125" r="120" />
  <text x="140" y="40" transform="rotate(30 10,20)">Separate CSS!</text>
</svg>

And a separate style.css file like this:

circle { fill: gold; stroke: maroon; stroke-width: 12px; }
text { fill: navy; font-size: 2em; font-family: "Times, Serif" }

You would then render it in R like this to get the same figure as above:

rsvg_png('fig.svg', css = 'style.css', file = 'output.png')

So is this useful? Maybe, I’m not sure. The R graphics system is pretty old; it currently doesn’t have any notion of separating style from layout like we do in modern webpages. It could be useful to think about which styling properties of graphics could be decoupled from the figure structure. D3 goes even further and defers almost all styling to CSS:

D3’s vocabulary of graphical marks comes directly from web standards: HTML, SVG, and CSS. For example, you can create SVG elements using D3 and style them with external stylesheets. You can use composite filter effects, dashed strokes and clipping. If browser vendors introduce new features tomorrow, you’ll be able to use them immediately—no toolkit update required. And, if you decide in the future to use a toolkit other than D3, you can take your knowledge of standards with you!

Maybe not everything generalizes directly to R, but some aspects do. One could imagine it would be useful to specify fonts and color palettes in the rendering phase, rather than hardcoding these in the graphic. Or that the same svg file would work in dark-mode, or with accessibility styling. For this to work, the graphics device would have to add support for tagging shapes and textboxes with a class or id, such that these can be selected using xpath, css or javascript.

I think that if we can untangle these things in the graphics device, it may be possible to produce R graphics as objects that can be rendered into bitmaps for printing and, at the same time, allow for interactivity and animation in the browser. There are a lot of JavaScript libraries to enhance svg graphics on a webpage 2, and with the rsvg package you can use exactly the same svg file to render a high quality image for your paper.

  1. For other uses of Rust in R, see my presentation at Erum2018: slides, recording

  2. Did you notice one was used in this post? Try reloading the page, and look at the mtcars plot.


To leave a comment for the author, please follow the link and comment on their blog: rOpenSci - open tools for open science.

Correlation coefficient and correlation test in R

Thu, 05/28/2020 - 02:00

[This article was first published on R on Stats and R, and kindly contributed to R-bloggers].

Introduction

Correlations between variables play an important role in a descriptive analysis. A correlation measures the relationship between two variables, that is, how they are linked to each other. In this sense, a correlation lets us know which variables evolve in the same direction, which ones evolve in opposite directions, and which ones are independent.

In this article, I show how to compute correlation coefficients, how to perform correlation tests and how to visualize relationships between variables in R.

Correlation is usually computed on two quantitative variables. See the Chi-square test of independence if you need to study the relationship between two qualitative variables.

Data

In this article, we use the mtcars dataset (loaded by default in R):

# display first 5 observations
head(mtcars, 5)
##                    mpg cyl disp  hp drat    wt  qsec vs am gear carb
## Mazda RX4         21.0   6  160 110 3.90 2.620 16.46  0  1    4    4
## Mazda RX4 Wag     21.0   6  160 110 3.90 2.875 17.02  0  1    4    4
## Datsun 710        22.8   4  108  93 3.85 2.320 18.61  1  1    4    1
## Hornet 4 Drive    21.4   6  258 110 3.08 3.215 19.44  1  0    3    1
## Hornet Sportabout 18.7   8  360 175 3.15 3.440 17.02  0  0    3    2

The variables vs and am are categorical variables, so they are removed for this article:

# remove vs and am variables
library(tidyverse)
dat <- mtcars %>% select(-vs, -am)

# display 5 first obs. of new dataset
head(dat, 5)
##                    mpg cyl disp  hp drat    wt  qsec gear carb
## Mazda RX4         21.0   6  160 110 3.90 2.620 16.46    4    4
## Mazda RX4 Wag     21.0   6  160 110 3.90 2.875 17.02    4    4
## Datsun 710        22.8   4  108  93 3.85 2.320 18.61    4    1
## Hornet 4 Drive    21.4   6  258 110 3.08 3.215 19.44    3    1
## Hornet Sportabout 18.7   8  360 175 3.15 3.440 17.02    3    2

Correlation coefficient

Between two variables

The correlation between 2 variables is found with the cor() function. Suppose we want to compute the correlation between horsepower (hp) and miles per gallon (mpg):

# Pearson correlation between 2 variables
cor(dat$hp, dat$mpg)
## [1] -0.7761684

Note that the correlation between variables x and y is equal to the correlation between variables y and x so the order of the variables in the cor() function does not matter.

The Pearson correlation is computed by default with the cor() function. If you want to compute the Spearman correlation, add the argument method = "spearman" to the cor() function:

# Spearman correlation between 2 variables
cor(dat$hp, dat$mpg,
  method = "spearman"
)
## [1] -0.8946646

While Pearson correlation is often used for quantitative continuous variables, Spearman correlation (which is based on the ranked values for each variable rather than on the raw data) is often used to evaluate relationships involving ordinal variables. Run ?cor for more information about the different methods available in the cor() function.
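The rank-based definition can be verified directly: Spearman's correlation is simply Pearson's correlation computed on the ranks of each variable. A small check with made-up data (not from the article):

```r
# Two short vectors of made-up observations
x <- c(3, 1, 4, 1, 5, 9, 2, 6)
y <- c(2, 7, 1, 8, 2, 8, 1, 8)

spearman <- cor(x, y, method = "spearman")

# Pearson correlation of the ranks (ties get average ranks by default,
# matching what cor() does internally for method = "spearman")
pearson_on_ranks <- cor(rank(x), rank(y))

all.equal(spearman, pearson_on_ranks)  # TRUE
```

Because only the ranks matter, Spearman's correlation is unaffected by any monotone transformation of the data, which is what makes it suitable for ordinal variables.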

Correlation matrix: correlations for all variables

Suppose now that we want to compute correlations for several pairs of variables. We can easily do so for all possible pairs of variables in the dataset, again with the cor() function:

# correlation for all variables
round(cor(dat),
  digits = 2 # rounded to 2 decimals
)
##        mpg   cyl  disp    hp  drat    wt  qsec  gear  carb
## mpg   1.00 -0.85 -0.85 -0.78  0.68 -0.87  0.42  0.48 -0.55
## cyl  -0.85  1.00  0.90  0.83 -0.70  0.78 -0.59 -0.49  0.53
## disp -0.85  0.90  1.00  0.79 -0.71  0.89 -0.43 -0.56  0.39
## hp   -0.78  0.83  0.79  1.00 -0.45  0.66 -0.71 -0.13  0.75
## drat  0.68 -0.70 -0.71 -0.45  1.00 -0.71  0.09  0.70 -0.09
## wt   -0.87  0.78  0.89  0.66 -0.71  1.00 -0.17 -0.58  0.43
## qsec  0.42 -0.59 -0.43 -0.71  0.09 -0.17  1.00 -0.21 -0.66
## gear  0.48 -0.49 -0.56 -0.13  0.70 -0.58 -0.21  1.00  0.27
## carb -0.55  0.53  0.39  0.75 -0.09  0.43 -0.66  0.27  1.00

This correlation matrix gives an overview of the correlations for all combinations of two variables.

Interpretation of a correlation coefficient

First of all, correlation ranges from -1 to 1.

On the one hand, a negative correlation implies that the two variables under consideration vary in opposite directions: if one variable increases, the other decreases, and vice versa. On the other hand, a positive correlation implies that the two variables vary in the same direction: if one increases, the other increases as well, and if one decreases, so does the other. Last but not least, a correlation close to 0 indicates that there is no linear relationship between the two variables (note that this does not necessarily mean they are independent).
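These cases can be reproduced with constructed data. An illustrative sketch (not from the article), including the caveat that zero correlation does not imply independence:

```r
x <- 1:20

cor(x,  2 * x + 3)   # 1: perfect positive linear relationship
cor(x, -2 * x + 3)   # -1: perfect negative linear relationship

# A variable can depend perfectly on another and still have
# (linear) correlation ~0 when the relationship is nonlinear:
z <- seq(-1, 1, length.out = 21)
cor(z, z^2)          # ~0, yet z^2 is fully determined by z
```

The last line is why "close to 0" should be read as "no linear relationship" rather than "independent".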

As an illustration, the Pearson correlation between horsepower (hp) and miles per gallon (mpg) found above is -0.78, meaning that the 2 variables vary in opposite directions. This makes sense: cars with more horsepower tend to consume more fuel (and thus have a lower mileage per gallon). Similarly, from the correlation matrix we see that the correlation between miles per gallon (mpg) and the time to drive 1/4 of a mile (qsec) is 0.42, meaning that fast cars (low qsec) tend to have a worse mileage per gallon (low mpg). This again makes sense as fast cars tend to consume more fuel.

The correlation matrix is however not easily interpretable, especially when the dataset is composed of many variables. In the following sections, we present some alternatives to the correlation matrix.

Visualizations

A scatterplot for 2 variables

A good way to visualize a correlation between 2 variables is to draw a scatterplot of the two variables of interest. Suppose we want to examine the relationship between horsepower (hp) and miles per gallon (mpg):

# scatterplot
library(ggplot2)
ggplot(dat) +
  aes(x = hp, y = mpg) +
  geom_point(colour = "#0c4c8a") +
  theme_minimal()

If you are unfamiliar with the {ggplot2} package, you can draw the scatterplot using the plot() function from R base graphics:

plot(dat$hp, dat$mpg)

or use the esquisse addin to easily draw plots using the {ggplot2} package.

Scatterplots for several pairs of variables

Suppose that instead of visualizing the relationship between only 2 variables, we want to visualize the relationship for several pairs of variables. This is possible thanks to the pairs() function. For this illustration, we focus only on miles per gallon (mpg), horsepower (hp) and weight (wt):

# multiple scatterplots
pairs(dat[, c(1, 4, 6)])

The figure indicates that weight (wt) and horsepower (hp) are positively correlated, whereas miles per gallon (mpg) seems to be negatively correlated with horsepower (hp) and weight (wt).

Another simple correlation matrix

This version of the correlation matrix presents the correlation coefficients in a slightly more readable way, i.e., by coloring the coefficients based on their sign. Applied to our dataset, we have:

# improved correlation matrix
library(corrplot)
corrplot(cor(dat),
  method = "number",
  type = "upper" # show only upper side
)

Correlation test

For 2 variables

Unlike a correlation matrix which indicates correlation coefficients between pairs of variables, the correlation test is used to test whether the correlation (denoted \(\rho\)) between 2 variables is significantly different from 0 or not.

Actually, a correlation coefficient different from 0 does not mean that the correlation is significantly different from 0. This needs to be tested with a correlation test. The null and alternative hypotheses for the correlation test are as follows:

  • \(H_0\): \(\rho = 0\)
  • \(H_1\): \(\rho \ne 0\)

Suppose that we want to test whether the rear axle ratio (drat) is correlated with the time to drive a quarter of a mile (qsec):

# Pearson correlation test
test <- cor.test(dat$drat, dat$qsec)
test
##
##  Pearson's product-moment correlation
##
## data:  dat$drat and dat$qsec
## t = 0.50164, df = 30, p-value = 0.6196
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  -0.265947  0.426340
## sample estimates:
##        cor
## 0.09120476

The p-value of the correlation test between these 2 variables is 0.62. At the 5% significance level, we do not reject the null hypothesis of no correlation; in other words, we cannot conclude that there is a linear relationship between the 2 variables.

This test shows that even though the correlation coefficient is different from 0 (the correlation is 0.09), it is not significantly different from 0.

Note that the p-value of a correlation test is based on the correlation coefficient and the sample size. The larger the sample size and the more extreme the correlation (closer to -1 or 1), the more likely the null hypothesis of no correlation will be rejected. With a small sample size, it is thus possible to obtain a relatively large correlation (based on the correlation coefficient), but still find a correlation not significantly different from 0 (based on the correlation test). For this reason, it is recommended to always perform a correlation test before interpreting a correlation coefficient to avoid flawed conclusions.
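This dependence on both the coefficient and the sample size is explicit in the test statistic used by cor.test() for Pearson correlation, \(t = r\sqrt{n-2}/\sqrt{1-r^2}\), which follows a Student t distribution with \(n-2\) degrees of freedom under \(H_0\). A quick check against the drat/qsec test above:

```r
r <- cor(mtcars$drat, mtcars$qsec)
n <- nrow(mtcars)

# t statistic and two-sided p-value, computed by hand
t_stat <- r * sqrt(n - 2) / sqrt(1 - r^2)
p_val  <- 2 * pt(-abs(t_stat), df = n - 2)

test <- cor.test(mtcars$drat, mtcars$qsec)
all.equal(unname(test$statistic), t_stat)  # TRUE
all.equal(test$p.value, p_val)             # TRUE
```

The formula makes the trade-off visible: for fixed r, the statistic grows with \(\sqrt{n-2}\), so the same coefficient that is non-significant with 30 observations can be highly significant with 300.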

For several pairs of variables

Similar to the correlation matrix used to compute correlation for several pairs of variables, the rcorr() function (from the {Hmisc} package) allows to compute p-values of the correlation test for several pairs of variables at once. Applied to our dataset, we have:

# correlation tests for whole dataset
library(Hmisc)
res <- rcorr(as.matrix(dat)) # rcorr() accepts matrices only

# display p-values (rounded to 3 decimals)
round(res$P, 3)
##        mpg   cyl  disp    hp  drat    wt  qsec  gear  carb
## mpg     NA 0.000 0.000 0.000 0.000 0.000 0.017 0.005 0.001
## cyl  0.000    NA 0.000 0.000 0.000 0.000 0.000 0.004 0.002
## disp 0.000 0.000    NA 0.000 0.000 0.000 0.013 0.001 0.025
## hp   0.000 0.000 0.000    NA 0.010 0.000 0.000 0.493 0.000
## drat 0.000 0.000 0.000 0.010    NA 0.000 0.620 0.000 0.621
## wt   0.000 0.000 0.000 0.000 0.000    NA 0.339 0.000 0.015
## qsec 0.017 0.000 0.013 0.000 0.620 0.339    NA 0.243 0.000
## gear 0.005 0.004 0.001 0.493 0.000 0.000 0.243    NA 0.129
## carb 0.001 0.002 0.025 0.000 0.621 0.015 0.000 0.129    NA
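From such a p-value matrix you can programmatically extract the variable pairs whose correlation is significant. A generic sketch on a toy symmetric matrix (with {Hmisc} you would pass res$P instead of the made-up P below):

```r
# Toy symmetric p-value matrix for three variables (NA on the diagonal)
P <- matrix(c(   NA, 0.001, 0.30,
              0.001,    NA, 0.04,
               0.30,  0.04,   NA),
            nrow = 3,
            dimnames = list(c("a", "b", "c"), c("a", "b", "c")))

# Row/column indices of the upper triangle with p < 0.05
sig <- which(upper.tri(P) & P < 0.05, arr.ind = TRUE)

# Pairs with a significant correlation and their p-values
data.frame(var1 = rownames(P)[sig[, 1]],
           var2 = colnames(P)[sig[, 2]],
           p    = P[sig])
```

Restricting to the upper triangle avoids listing each pair twice, since the matrix is symmetric.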

Only correlations with p-values smaller than the significance level (usually \(\alpha = 0.05\)) should be interpreted.

Combination of correlation coefficients and correlation tests

Now that we covered the concepts of correlation coefficients and correlation tests, let’s see if it is possible to combine these two concepts in one single visualization.

Ideally, we would like to have a concise overview of correlations between all possible pairs of variables present in a dataset, with a clear distinction for correlations that are significantly different from 0.

The figure below, known as a correlogram and adapted from the corrplot() function, does precisely this:

corrplot2 <- function(data,
                      method = "pearson",
                      sig.level = 0.05,
                      order = "original",
                      diag = FALSE,
                      type = "upper",
                      tl.srt = 90,
                      number.font = 1,
                      number.cex = 1,
                      mar = c(0, 0, 0, 0)) {
  library(corrplot)
  data_incomplete <- data
  data <- data[complete.cases(data), ]
  mat <- cor(data, method = method)
  cor.mtest <- function(mat, method) {
    mat <- as.matrix(mat)
    n <- ncol(mat)
    p.mat <- matrix(NA, n, n)
    diag(p.mat) <- 0
    for (i in 1:(n - 1)) {
      for (j in (i + 1):n) {
        tmp <- cor.test(mat[, i], mat[, j], method = method)
        p.mat[i, j] <- p.mat[j, i] <- tmp$p.value
      }
    }
    colnames(p.mat) <- rownames(p.mat) <- colnames(mat)
    p.mat
  }
  p.mat <- cor.mtest(data, method = method)
  col <- colorRampPalette(c("#BB4444", "#EE9988", "#FFFFFF", "#77AADD", "#4477AA"))
  corrplot(mat,
    method = "color", col = col(200), number.font = number.font,
    mar = mar, number.cex = number.cex,
    type = type, order = order,
    addCoef.col = "black", # add correlation coefficient
    tl.col = "black", tl.srt = tl.srt, # rotation of text labels
    # combine with significance level
    p.mat = p.mat, sig.level = sig.level, insig = "blank",
    # hide correlation coefficients on the diagonal
    diag = diag
  )
}

corrplot2(
  data = dat,
  method = "pearson",
  sig.level = 0.05,
  order = "original",
  diag = FALSE,
  type = "upper",
  tl.srt = 75
)

The correlogram shows correlation coefficients for all pairs of variables (with more intense colors for more extreme correlations), and correlations not significantly different from 0 are represented by a white box.

To learn more about this plot and the code used, I invite you to read the article entitled “Correlogram in R: how to highlight the most correlated variables in a dataset”.

Thanks for reading. I hope this article helped you to compute correlations and perform correlation tests in R.

As always, if you have a question or a suggestion related to the topic covered in this article, please add it as a comment so other readers can benefit from the discussion.

Get updates every time a new article is published by subscribing to this blog.


To leave a comment for the author, please follow the link and comment on their blog: R on Stats and R.

Critique of “Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period” — Part 1: Reproducing the results

Wed, 05/27/2020 - 20:06

[This article was first published on R Programming – Radford Neal's blog, and kindly contributed to R-bloggers].

I’ve been looking at the following paper, by researchers at Harvard’s school of public health, which was recently published in Science:

Kissler, Tedijanto, Goldstein, Grad, and Lipsitch (2020) Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period (also available here, with supplemental materials here).

This is one of the papers referenced in my recent post on seasonality of COVID-19. The paper does several things that seem interesting:

  • It looks at past incidence of “common cold” coronaviruses, estimating the viruses’ reproduction numbers (R) over time, and from that their degrees of cross-immunity and the seasonal effect on their transmission.
  • It fits an ODE model for the two common cold betacoronaviruses, which are related to SARS-CoV-2 (the virus for COVID-19), using the same data.
  • It then adds SARS-CoV-2 to this ODE model, and looks at various scenarios for the future, varying the duration of immunity for SARS-CoV-2, the degree of cross-immunity of SARS-CoV-2 and common cold betacoronaviruses, and the effect of season on SARS-CoV-2 transmission.

In future posts, I’ll discuss the substance of these contributions. In this post, I’ll talk about my efforts at reproducing the results in the paper from the code and data available, which is a prerequisite for examining why the results are as they are, and for looking at how the methods used might be improved.

I’ll also talk about an amusing / horrifying aspect of the R code used, which I encountered along the way, about CDC data sharing policy, and about the authors’ choices regarding some graphical presentations.

The authors released some of the code and data used in the paper in three github repositories.

These repositories correspond roughly (but incompletely) to the three parts of the paper listed above. I’ll talk about reproducing the results of each part in turn.

Estimating and modelling the change of R over time for common cold coronaviruses

The paper uses data from the CDC to estimate how the reproduction number, R, for the four “common cold” coronaviruses has changed over time (from Fall 2014 to Spring 2019, in the US), and uses these estimates for R to estimate the impact of seasonality on transmission for these coronaviruses, the degree of immunity that develops to them, and the degree of cross-immunity between the two betacoronaviruses (HKU1 and OC43). Since SARS-CoV-2 is also a betacoronavirus, one might expect it to behave at least somewhat similarly to the two common cold betacoronaviruses, and for there to perhaps be some cross-immunity between SARS-CoV-2 and the other betacoronaviruses.

Reproducing the estimates for R

The procedure used in the paper to estimate R over time for each of these viruses has several steps:

  • The incidence of infection each week with the common cold coronaviruses was estimated (up to an unknown scaling factor, relating to how likely sick people are to visit a doctor) by multiplying the weekly reports of physician visits for Influenza-Like Illness (ILI) by the weekly percentage of laboratory tests for the four coronaviruses that were positive for each of them.
  • A spline-based procedure was used to interpolate daily incidence of each virus from these weekly numbers.
  • From these daily incidence numbers, estimates of R for each day were obtained using a formula that looks at the incidence that day and the previous 19 days.
  • The daily estimates for R were used to produce weekly estimates for R, by taking the geometric mean of the 21 daily values for the week in question and the previous and following weeks.
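The last smoothing step — a geometric mean over a 21-day window — can be sketched as follows (a hypothetical illustration of the idea, not the authors' code; the function names are made up):

```r
# Geometric mean: exponential of the mean of the logs
geom_mean <- function(x) exp(mean(log(x)))

# Weekly R for (interior) week w from daily estimates RDaily:
# geometric mean over the 21 days covering weeks w-1, w and w+1
weekly_R <- function(RDaily, w) {
  days <- ((w - 2) * 7 + 1):((w + 1) * 7)
  geom_mean(RDaily[days])
}

# Sanity check: with a constant daily R the weekly estimate is that constant
weekly_R(rep(1.3, 28), w = 2)  # 1.3
```

The geometric mean is the natural choice here because R acts multiplicatively on incidence from one generation to the next.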

The first problem with reproducing these estimates is that although the data on physician visits for ILI is available from here (and included in the first repository above), the CDC allows access to only the last two years of data on positive tests for common cold coronaviruses (from here). According to the README for the repository, “Full data used in paper is available through a data use agreement with the CDC”.

This sort of bullshit makes one wonder about the mentality of the people running the CDC. There is obviously no reason whatever for keeping this data under wraps. Patient confidentiality can’t be an issue, both due to the nature of the data, and to the fact that they do make it public for the last two years. Nor can it be a matter of minimizing work on the part of the CDC — it must take extra effort to keep removing older data so that only two years are available, not to mention the effort of processing data use agreements.

This CDC policy certainly resulted in extra work for the authors of this paper. They included the last two years of publicly-available data in the first repository above, along with R code that had been modified to work with only two years of data rather than five years. The results produced are of course not the same as in the paper.

Fortunately, the second repository above has a data file that in fact includes the full data that was omitted from the first repository. The data can be reformatted to the required form as follows:

dfi <- read.csv("../nCoV_introduction-master/nrevssCDC_ILI.csv", head = TRUE)
dfo <- as.data.frame(list(
  RepWeekDate = as.character(as.Date(dfi$WEEKEND), "%m/%d/%y"),
  CoVHKU1 = round(100 * dfi$HKU1, 7),
  CoVNL63 = round(100 * dfi$NL63, 7),
  CoVOC43 = round(100 * dfi$OC43, 7),
  CoV229E = round(100 * dfi$E229, 7)
))
write.table(dfo, "full-Corona4PP_Nat.csv",
  sep = ",", row.names = FALSE, col.names = TRUE, quote = FALSE)

Now the remaining task is to modify the supplied R code in the first repository so it works with the full five years of data. Here are the crucial diffs needed to do this:

-# Data below shared by NREVSS team
-df.us_cov_national <- read.csv("Corona4PP_Nat.csv") #2018-03-10 through 2020-02-29
+# Reconstruction of full dataset used in paper.
+# Data is for 2014-07-05 through 2019-06-29.
+df.us_cov_national <- read.csv("full-Corona4PP_Nat.csv")

-    Week_start < "2018-07-01" ~ 0, # First season (and only complete season) in this dataset is 2018-19
-    (Week_start >= "2018-07-01") & (Week_start < "2019-07-01") ~ 1,
-    (Week_start >= "2019-07-01") & (Week_start < "2020-07-01") ~ 2)) # 2018-2019 is the last season in our data
+    Week_start < "2014-07-06" ~ 0, # Before first season
+    Week_start < "2015-07-05" ~ 1,
+    Week_start < "2016-07-03" ~ 2,
+    Week_start < "2017-07-02" ~ 3,
+    Week_start < "2018-07-01" ~ 4,
+    Week_start < "2019-06-30" ~ 5, # 2018-2019 is the last season, last data is for 2019-06-29
+    TRUE ~ 0)) # after last season

-for(s in 1:2){
-  temp.df <- df.us_all_national_withR %>% filter(season==s, epi_week>=season_start | epi_week<=season_end)
+for(s in 1:5){
+  temp.df <- df.us_all_national_withR %>% filter(season==s, epi_week>=season_start | epi_week<=(season_end-(s==1))) # -(s==1) to fudge for 53 weeks in 2014

-    season==1 ~ "2018-19",
-    season==2 ~ "2019-20")) %>%
-  mutate(season=factor(season, levels=c("1", "2"))) #Set season 1 as reference group in regression
-# Note: with this limited dataset, season 2 is incomplete. Full dataset has 5 complete seasons.
+    season==1 ~ "2014-15",
+    season==2 ~ "2015-16",
+    season==3 ~ "2016-17",
+    season==4 ~ "2017-18",
+    season==5 ~ "2018-19")) %>%
+  mutate(season=factor(season, levels=c("1", "2", "3", "4", "5"))) #Set season 1 as reference group in regression

I also added code to produce various plots and other output, some corresponding to plots in the paper or supplemental information, and some for my use in figuring out what the code does. The original code doesn’t come with an open-source license, so I won’t post my full modified source file, but some of the code that I added at the end is here, and some of the plots that it produced are here and here.

A digression about the R code

I will, however, talk about one little snippet of the original program, whose behaviour is… interesting:

RDaily <- numeric()
for (u in 1:(length(week_list)*7)) { # Estimate for each day
  sumt <- 0
  for (t in u:(u+stop)) { # Look ahead starting at day u through (u+max SI)
    suma <- 0
    for (a in 0:(stop)) { # Calc denominator, from day t back through (t-max SI)
      suma = daily_inc[t-a,v]*func.SI_pull(a, serial_int) + suma
    }
    sumt = (daily_inc[t,v]*func.SI_pull(t-u, serial_int))/suma + sumt
  }
  RDaily[u] = sumt
}

This code computes daily estimates for R (putting them in RDaily), using the following formula from the supplemental information (reconstructed here from the code, where \(I_t\) is the interpolated daily incidence and \(w\) is the serial interval distribution, with \(i_{\max}\) the maximum serial interval, stop in the code): \[
R_u = \sum_{t=u}^{u+i_{\max}} \frac{I_t \, w(t-u)}{\sum_{a=0}^{i_{\max}} I_{t-a} \, w(a)}
\]

Notice that the loop for u starts at 1, the loop for t inside that starts at u, and the loop for a inside that starts at 0, and goes up to stop (imax in the formula), whose value is 19. For the first access to daily_inc, the subscript t-a will be 1, the next time, it will be 0, then -1, -2, …, -18. All but the first of these index values seem to be out of bounds. But the program runs without producing an error, and produces reasonable-looking results. How can this be?

Well, R programmers will know that negative indexes are allowed, and extract all items except those identified by the negative subscript. So daily_inc[-1,v] will create a long vector (1819 numbers) without error. It seems like an error should arise later, however, when this results in an attempt to store 1819 numbers into RDaily[u], which has space for only one.

But crucially, before a negative index gets used, there’s an attempt to access daily_inc[0,v]. R programmers may also know that using a zero index is not an error in R, even though R vectors are indexed starting at 1 — zero indexes are just ignored. (I’ve previously written about why this is a bad idea.) When the subscript is a single zero index, ignoring it results in extraction of a zero-length vector.

Now, zero-length vectors also seem like the sort of thing that would lead to some sort of error later on. But R is happy (for good reason) to multiply a zero-length vector by a scalar, with the result being  another zero-length vector. The same is true for addition, so when t-a is 0, the effect is that suma in the innermost loop is set to a zero-length vector. (This is not the same as 0, which is what it was initialized to!)
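The two behaviours just described can be seen in a few lines at the console:

```r
v <- c(10, 20, 30)
v[0]                 # a zero index is silently ignored: result is numeric(0)
length(v[0])         # 0 -- a zero-length vector, not the number 0
v[0] * 5 + 0         # arithmetic propagates the zero length: still numeric(0)
```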

Only after suma has been set to a zero-length vector does it get multiplied by a vector of length 1819, from accessing daily_inc[-1,v]. R is also happy to multiply a zero-length vector by a vector of length greater than one (though this is rather dubious), with the result being a zero-length vector. So suma stays a zero-length vector for the rest of the inner loop, as daily_inc is accessed with indexes of -1, -2, …, -18. After this loop completes, suma is used to compute a term to add to sumt, with R’s treatment of arithmetic on zero-length vectors resulting in sumt being set to a zero-length vector, and remaining a zero-length vector even when t becomes large enough that accesses with indexes less than one are no longer done.

But it still seems we should get an error! After the loop over t that computes an estimate for R at time u, this estimate is stored with the assignment RDaily[u]=sumt. Since sumt is a zero-length vector, we’d expect an error — we get one with code like x=c(10,20);x[2]=numeric() for example (note that numeric() creates a zero-length numeric vector). Now, the code is actually extending RDaily, rather than replacing an existing element, but that doesn’t explain the lack of an error, since code like x=c(10,20);x[3]=numeric() also gives an error.

The final crucial point is that all these “out-of-bounds” accesses occur at the beginning of the procedure, when RDaily is itself a zero-length vector. For no clear reason, R does not signal an error for code like x=numeric();x[3]=numeric(), but simply leaves x as a zero-length vector. And so it is in this code, with the result that RDaily is still a zero-length vector after all operations with zero and negative out-of-bounds accesses have been done. At that point, when u is 20, a sensible value for R will be computed, and stored in RDaily[20]. R will automatically extend RDaily from length zero to length 20, with the first 19 values set to NA, and later computations will proceed as expected.
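The contrast between assigning past the end of a non-empty vector and past the end of a zero-length one, and the NA-padding auto-extension, can be demonstrated directly:

```r
x <- c(10, 20)
# x[3] <- numeric()  # would be an error: replacement has length zero

x <- numeric()
x[3] <- numeric()    # no error; x is simply left as a zero-length vector
length(x)            # 0
x[5] <- 99           # assigning a scalar past the end auto-extends the vector
x                    # NA NA NA NA 99
```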

So in the end, the result computed is sensible, with the estimates for R on days for which data on 19 previous days is not available being set to NA, albeit by a mechanism that I’m pretty sure was not envisioned by the programmer. Later on, there are also out-of-bounds accesses past the end of the vector, which also result in NA values rather than errors. All these out-of-bounds references can be avoided by changing the loop over u as follows:

for (u in (stop+1):(length(week_list)*7-stop)) { #Estimate for each day

Modeling the effects on R of immunity and seasonality

The code produces estimates of R for each week of the cold season for all four coronaviruses, but attention focuses mainly on the two betacoronaviruses, HKU1 and OC43. A regression model is built for the R values of these viruses in terms of a seasonal effect (modelled as a spline, common to both viruses) and the effects of immunity from exposure to the same virus and of cross-immunity from exposure to the other of the two betacoronaviruses (four coefficients). The immunity effects can only be estimated up to some unknown scaling factor, with the assumption that the sum of weekly incidence numbers up to some point in the season is proportional to the fraction of the population who have been exposed to that virus.
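The paper's exact model specification is not reproduced here, but a regression of this general shape can be sketched in base R (all data and variable names below are hypothetical placeholders; `ns()` is the natural-spline basis from the splines package, and the simulated data merely makes the sketch runnable):

```r
library(splines)

# Hypothetical weekly data for the two betacoronaviruses: an R estimate, the
# week of the cold season, and cumulative-incidence proxies for immunity to
# the same virus and cross-immunity to the other virus.
set.seed(1)
d <- data.frame(
  virus     = rep(c("HKU1", "OC43"), each = 33),
  week      = rep(1:33, 2),
  imm_same  = runif(66),
  imm_other = runif(66)
)
d$R <- 2 - 1.5 * d$imm_same - 0.5 * d$imm_other + rnorm(66, sd = 0.1)

# A seasonal spline common to both viruses, plus per-virus immunity and
# cross-immunity effects (four immunity coefficients in total, matching the
# description above).
fit <- lm(R ~ ns(week, df = 5) + virus:imm_same + virus:imm_other, data = d)
coef(fit)
```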

The results I get match the regression coefficients in Table S1 of the paper’s supplemental information, and some additional plots are also as expected given results in the paper.

The seasonal and immunity effects over time are summarized in Figure 1 of the paper. Here is the part of that figure pertaining to HKU1 and the 2015-2016 cold season:

The orange curve shows the estimated multiplicative seasonal effect on R (horizontal dots are at one), the red curve is the estimated effect on R from immunity to HKU1, and the blue curve is the estimated effect from cross-immunity to OC43.

Here is my reproduction of this figure (without attempting to reproduce the error bands):

This seems to perfectly match the plot in the paper, except that the plot in the paper shows only 30 weeks, whereas the model is fit to data for 33 weeks, which is also the time span of the spline used to model the seasonal effect. As one can see in my plot, after week 30 (at the bold vertical bar), the modelled seasonal effect on R rises substantially. But this feature of the model fit is not visible in the figures in the paper.

Researchers at Harvard really ought to know that they should not do this. The rise after week 30 that is not shown in their plots is contrary to the expectation that R will decrease in summer, and is an indication that their modelling procedure may not be good. In particular, after seeing this rise at the end of the season, one might wonder whether the sharp rise in the seasonal effect on R seen at the beginning of the season is actually real, or is instead just an artifact of their spline model.

An ODE model for betacoronavirus incidence

The second major topic of the paper is the fitting of an ODE (Ordinary Differential Equation) model for the incidence of the two common cold betacoronaviruses. The data used is the same as for the first part of the paper, but rather than directly estimating R at each time point, an underlying model of the susceptible-exposed-infected-recovered-susceptible (SEIRS) type is used, from which incidence numbers can be derived and compared to the data.
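As a hedged sketch of what an SEIRS model of this general kind looks like (this is not the paper's code; parameter names and values below are purely illustrative), a minimal right-hand side for deSolve::ode might be:

```r
library(deSolve)

# Minimal SEIRS right-hand side, with time in weeks. Susceptibles are
# re-created as immunity wanes (rate omega), and transmission varies
# seasonally. All parameter values are illustrative, not fitted.
seirs <- function(t, y, p) {
  with(as.list(c(y, p)), {
    beta <- beta0 * (1 + amp * cos(2 * pi * t / 52))  # seasonal transmission
    dS <- -beta * S * I + omega * R
    dE <-  beta * S * I - sigma * E
    dI <-  sigma * E - gamma * I
    dR <-  gamma * I - omega * R
    list(c(dS, dE, dI, dR))
  })
}

out <- ode(y = c(S = 0.99, E = 0, I = 0.01, R = 0),
           times = 0:260, func = seirs,
           parms = c(beta0 = 2, amp = 0.3, sigma = 7/5, gamma = 7/5, omega = 1/45))
```

From a trajectory like this, weekly incidence and an implied R can be derived for comparison with the data.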

According to the paper and supplemental information, the parameters of the SEIRS model (e.g., the degree of seasonal variation, and the rate at which immunity wanes) were fit by a procedure combining Latin hypercube sampling (implemented in R) and Nelder-Mead optimization (implemented in Mathematica). The code for these procedures has not been released, however. Hence reproducing this part of the paper is not possible.
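Their fitting code is unavailable, but the general recipe — Latin hypercube sampling to pick starting points, then Nelder-Mead refinement from each — can be sketched in base R (the objective f and parameter bounds below are placeholder examples, not the paper's):

```r
# Latin hypercube sample: n points in the k-dimensional unit cube, with each
# coordinate drawn from a different stratum in a random permutation.
lhs_sample <- function(n, k) {
  sapply(1:k, function(j) (sample(n) - runif(n)) / n)
}

# Rescale each unit-cube point into the parameter box, run Nelder-Mead from
# each, and keep the best result.
fit_with_lhs <- function(f, lower, upper, n_start = 50) {
  k <- length(lower)
  starts <- lhs_sample(n_start, k)
  best <- NULL
  for (i in 1:n_start) {
    p0  <- lower + starts[i, ] * (upper - lower)
    res <- optim(p0, f, method = "Nelder-Mead")
    if (is.null(best) || res$value < best$value) best <- res
  }
  best
}

# Example: minimise a shifted quadratic over [-5, 5]^2
f <- function(p) sum((p - c(1, 2))^2)
fit_with_lhs(f, lower = c(-5, -5), upper = c(5, 5))$par   # near (1, 2)
```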

The second repository above does contain code to run the SEIRS model, with parameters set to values that are fixed in the code (presumably to the values found by the optimization procedure that they ran).

This SEIRS model produces values for R for each virus and time point, which can be compared to the estimates from the first part of the paper. To do this, the R code for this part of the paper needs to read the estimates for R produced by the R code for the first part. These estimates can be written out as follows:

rmv <- c(1:3, nrow(Reff.CoV_ili_x_pos_pct_SARS):(nrow(Reff.CoV_ili_x_pos_pct_SARS)-2))
write.table(Reff.CoV_ili_x_pos_pct_SARS[-rmv, ], "R_ili_x_pos_pct_SARS.csv",
            row.names = FALSE, col.names = TRUE, quote = FALSE, sep = ",")

My modified version of the figuremaker.R R source file provided in the second repository above is here. It has small modifications to read the data as written out above, and to enable production of plots.

One of the plots produced by running this code is an exact reproduction of Figure 2A in the paper:

This plot shows the actual and simulated incidence of the two common cold betacoronaviruses over five cold seasons.

Running the code also produces what should be reproductions of Figures 2B and 2C, in which the values for R produced by the best-fit SEIRS model (the curve) are compared to the weekly estimates for R from the first part of the paper. But these reconstructions do not match the paper. Here is Figure 2B from the paper:

And here is what the code produces:

The curves are the same (apart from vertical scale), but the figure in the paper is missing the first 12 estimates for R, and the first few estimates that follow those are noticeably different.

I found that an exact reproduction of Figures 2B and 2C in the paper can be obtained by re-running the code for estimating R using a data file in which the first eleven estimates for R have been deleted, and in which the code has been changed to say that the first season starts on 2014-09-28 (rather than 2014-07-06). Here is the perfectly-matching result:

Perhaps Figures 2B and 2C in the paper were inadvertently produced using a file of R estimates created using preliminary code that for some reason treated the start of the season differently than the final version of the code. It’s unfortunate that the published figure is somewhat misleading regarding the match between the R estimates from the first part of the paper and the R estimates from the SEIRS model, since this match is significantly worse for the missing data points than for the others.

Projecting the future course of SARS-CoV-2 infection

The final part of the paper extends the ODE model for the two common cold betacoronaviruses to include SARS-CoV-2, considering various possibilities for the characteristics of SARS-CoV-2, such as degree of seasonality and duration of immunity, as well as various interventions such as social distancing.

Although the general structure of this extended model is documented in the supplemental information, the only code relating to these simulations is in the third repository above, which appears to be for a preliminary version of the paper. This code is in the form of a Mathematica notebook (which can be viewed, though not executed, with the free program here). The figures in this notebook resemble those in Figure 3 of the paper, but do not match in detail.

A further-extended model is used to model scenarios regarding health care utilization, and described in the supplemental information. No code is available for this model.

Future posts

This post has been largely confined to finding out whether the results in the paper can be reproduced, and if so how.

For the first part of the paper, in which estimates for R through time were made and used to examine seasonal and immunity effects, I’ve been able to fully reproduce the results. For the second part, the SEIRS model for infection by common cold coronaviruses, the results for specified parameter values can be reproduced, but the optimization method used to find best-fit parameters is not at all reproducible from the information provided. The third part, in which future scenarios are simulated, is also not reproducible.

In future posts, I’ll discuss the substantive results in the paper, informed where possible by experiments that I can do now that I have code that reproduces some of the results in the paper. I’ll also consider possible improvements in the methods used.

 


To leave a comment for the author, please follow the link and comment on their blog: R Programming – Radford Neal's blog. R-bloggers.com offers daily e-mail updates about R news and tutorials about learning R and many other topics. Click here if you're looking to post or find an R/data-science job. Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.

Reasons why data science projects are not always successful – Part 1

Wed, 05/27/2020 - 12:59

[This article was first published on R-Bloggers – eoda GmbH, and kindly contributed to R-bloggers].

Data science is one of the most wide-ranging disciplines of the 21st century. Data scientists use a wide variety of methods and tools to generate knowledge from data and its analysis. Especially in times like these, data and the insights we can draw from it are becoming increasingly important. Almost every business process generates and uses data – in fact, virtually every one, once you drop the distinction between digital and analog.

People often think: the use case is defined, the hoped-for business value is clear – and yet not every data science project delivers the promised added value. What this may be due to, which factors need to be considered, and how a tool must be set up to help are discussed in the following.

1. The proper team

No project can bring company-wide success with just a single user group. Business experts cannot develop analysis scripts, algorithms, or a platform that puts these scripts into production. Software developers can develop the platform but have no influence on the technical infrastructure. Data engineers know the requirements of a high-performance infrastructure but have to rely on the others for development. Data scientists develop the analyses but depend on the input of the others for the business context and the infrastructure. Finally, the users have to put the solution to work.

Once the use case has been found, all participants must work together! This starts with the conception, continues through development and ends in permanently evaluated, productive use of the solution. One approach is DataOps: the basic principle of the DevOps model is extended with further tools, methods and the groups of actors mentioned above, with the aim of continuously developing and optimizing data analyses and turning their results into data products. In short, successful projects require the appropriate know-how from business expertise, data science and infrastructure, plus all those who will work directly with the solution. Does every organisation therefore need a whole football team? Again: no! Business context and users are available in every organization. Expertise in data science, infrastructure and the implementation of a suitable platform can be provided by a strong partner who works closely with the organisation and the people involved. This also increases acceptance of the projects and their results.

 

2. Acceptance – the user must be in focus

Digital services often struggle with a lack of acceptance. No solution is promising if it causes users more problems than it brings benefits. This can be due to an overload of information, cumbersome dashboards or simply unnecessarily complicated menu navigation.

This must be avoided:

  • Cumbersome usage, which rather hinders the daily business – can be solved with clearly arranged dashboards / views, which can also be individualized.
  • Poor performance of the solution – nobody likes to wait 7 minutes for a report to be generated.
  • Lack of traceability of the results – how did they come about? Focus on transparency instead of black box concepts.
  • Missing or incomprehensible presentation of results – nobody wants charts that are not understood.

  • Unnecessary switching between different solutions for different use cases – dragging a table out of solution X and then creating a chart in Excel? No thanks! The work must be done without media discontinuity.

The good news: the market has reacted and developed appropriate platforms. A data science platform such as YUNA addresses these points through its modular structure. Dashboards can be set up so that only the desired information is displayed and used. Results can be filtered in a configurable way, and each data point of a chart can be traced back to the actual data source if desired. Analysis scripts run in a robust, high-performance environment regardless of the amount of data or queries. There is also no need to create a dedicated solution for each project: different projects can be controlled and parts reused. An analysis of sales-figure trends, a condition-monitoring portal or even a predictive-maintenance system – in one platform? With YUNA it is!

As it turns out, the human factor is of immense importance to the successful deployment of data science projects. With this in mind, the next article will look at how to reconcile objective and reality, and literally get down to the basis – the database.

Are you looking for an innovative data science platform? Then discover YUNA!


To leave a comment for the author, please follow the link and comment on their blog: R-Bloggers – eoda GmbH.

Analyzing data from COVID19 R package

Wed, 05/27/2020 - 02:00

[This article was first published on R | TypeThePipe, and kindly contributed to R-bloggers].




Introduction

The idea behind this post was to play with and discover some of the info contained in the COVID19 R package, which collects data from several governmental sources. This package is being developed by Guidotti and Ardia from COVID19 Data Hub.

Later, I will add to the analysis the historical record of deaths over recent years for some European countries and try to address whether deaths from COVID-19 are being reported accurately. This data is collected in The Human Mortality Database.

Although it may seem a bit overkill, it was such an intensive Tidyverse exercise that I decided to show pretty much all the code right here, because that is what this post is about: I don’t intend to perform a really deep analysis, but to show a fairly simple way to tackle this problem using R and the Tidyverse toolkit.

You might pick up a couple of tricks, like using group_split() + map() to manipulate each group freely, using the {{ }} (curly-curly) operator to write programmatic dplyr code, some custom plotting with plotly, or the recently discovered ggtext package by @ClausWilke.
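For readers unfamiliar with it, the curly-curly operator forwards a function argument into dplyr verbs as a column reference (a standalone toy example, not code from this post; the function name top_rows_by is made up):

```r
library(dplyr)

# {{col}} "embraces" the argument so dplyr treats it as a column of df,
# letting callers pass bare column names.
top_rows_by <- function(df, col, n = 2) {
  df %>% arrange(desc({{col}})) %>% head(n)
}

top_rows_by(mtcars, mpg, n = 3)   # the three cars with the highest mpg
```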

Playing with COVID19 package

Let’s start by loading data from the COVID19 package with the covid19() function. It contains lots of information, but I will keep things simple and work only with the Country, Date, Population and Deaths variables.

covid_deaths <- covid19(verbose = FALSE) %>%
  ungroup() %>%
  mutate(Week = week(date)) %>%
  select(Country = id, Date = date, Week, Deaths = deaths, Population = population) %>%
  filter(Date < today() %>% add(days(-2))) %>%
  mutate(Deaths_by_1Mpop = round(Deaths/Population*1e6))

I wanted to focus mainly on the most populated countries of the world because some of them are among the most affected by the virus, so I created a function for that as I will use it more than once.

get_top_countries_df <- function(covid_deaths, top_by, top_n, since) {
  covid_deaths %>%
    group_by(Date) %>%
    top_n(100, Population) %>%
    group_by(Country) %>%
    filter(Date == max(Date)) %>%
    ungroup() %>%
    top_n(top_n, {{top_by}}) %>%
    select(Country) %>%
    inner_join(covid_deaths, ., by = "Country") %>%
    filter(Date >= ymd(since))
}

Starting with a basic plot. You have already seen this one a thousand times.

ggplotly(
  covid_deaths %>%
    get_top_countries_df(top_by = Deaths, top_n = 10, since = 20200301) %>%
    ggplot(aes(Date, Deaths, col = Country)) +
    geom_line(size = 1, show.legend = F) +
    labs(title = "Total deaths due to COVID-19", caption = "Source: covid19datahub.io") +
    theme_minimal() +
    theme_custom() +
    scale_color_tableau() +
    NULL
) %>%
  layout(
    legend = list(orientation = "h", y = 0),
    annotations = list(
      x = 1, y = 1.05, text = "Source: covid19datahub.io", showarrow = F,
      xref = 'paper', yref = 'paper', font = list(size = 10)
    )
  )

{"x":{"data":[{"x":[18322,18323,18324,18325,18326,18327,18328,18329,18330,18331,18332,18333,18334,18335,18336,18337,18338,18339,18340,18341,18342,18343,18344,18345,18346,18347,18348,18349,18350,18351,18352,18353,18354,18355,18356,18357,18358,18359,18360,18361,18362,18363,18364,18365,18366,18367,18368,18369,18370,18371,18372,18373,18374,18375,18376,18377,18378,18379,18380,18381,18382,18383,18384,18385,18386,18387,18388,18389,18390,18391,18392,18393,18394,18395,18396,18397,18398,18399,18400,18401,18402,18403,18404,18405,18406],"y":[0,0,0,0,0,0,0,0,0,1,4,5,8,12,18,29,40,61,88,119,157,198,276,350,448,549,672,805,944,1113,1298,1545,1755,1986,2260,2494,2768,3072,3393,3665,3984,4274,4617,4901,5178,5454,5708,5912,6123,6337,6536,6738,6930,7105,7252,7407,7583,7709,7824,7922,8006,8104,8177,8273,8370,8451,8526,8605,8675,8745,8818,8880,8928,8982,9028,9065,9093,9128,9164,9194,9226,9255,9291,9314,9346],"text":["Date: 2020-03-01
Deaths: 0
Country: BEL","Date: 2020-03-02
Deaths: 0
Country: BEL","Date: 2020-03-03
Deaths: 0
Country: BEL","Date: 2020-03-04
Deaths: 0
Country: BEL","Date: 2020-03-05
Deaths: 0
Country: BEL","Date: 2020-03-06
Deaths: 0
Country: BEL","Date: 2020-03-07
Deaths: 0
Country: BEL","Date: 2020-03-08
Deaths: 0
Country: BEL","Date: 2020-03-09
Deaths: 0
Country: BEL","Date: 2020-03-10
Deaths: 1
Country: BEL","Date: 2020-03-11
Deaths: 4
Country: BEL","Date: 2020-03-12
Deaths: 5
Country: BEL","Date: 2020-03-13
Deaths: 8
Country: BEL","Date: 2020-03-14
Deaths: 12
Country: BEL","Date: 2020-03-15
Deaths: 18
Country: BEL","Date: 2020-03-16
Deaths: 29
Country: BEL","Date: 2020-03-17
Deaths: 40
Country: BEL","Date: 2020-03-18
Deaths: 61
Country: BEL","Date: 2020-03-19
Deaths: 88
Country: BEL","Date: 2020-03-20
Deaths: 119
Country: BEL","Date: 2020-03-21
Deaths: 157
Country: BEL","Date: 2020-03-22
Deaths: 198
Country: BEL","Date: 2020-03-23
Deaths: 276
Country: BEL","Date: 2020-03-24
Deaths: 350
Country: BEL","Date: 2020-03-25
Deaths: 448
Country: BEL","Date: 2020-03-26
Deaths: 549
Country: BEL","Date: 2020-03-27
Deaths: 672
Country: BEL","Date: 2020-03-28
Deaths: 805
Country: BEL","Date: 2020-03-29
Deaths: 944
Country: BEL","Date: 2020-03-30
Deaths: 1113
Country: BEL","Date: 2020-03-31
Deaths: 1298
Country: BEL","Date: 2020-04-01
Deaths: 1545
Country: BEL","Date: 2020-04-02
Deaths: 1755
Country: BEL","Date: 2020-04-03
Deaths: 1986
Country: BEL","Date: 2020-04-04
Deaths: 2260
Country: BEL","Date: 2020-04-05
Deaths: 2494
Country: BEL","Date: 2020-04-06
Deaths: 2768
Country: BEL","Date: 2020-04-07
Deaths: 3072
Country: BEL","Date: 2020-04-08
Deaths: 3393
Country: BEL","Date: 2020-04-09
Deaths: 3665
Country: BEL","Date: 2020-04-10
Deaths: 3984
Country: BEL","Date: 2020-04-11
Deaths: 4274
Country: BEL","Date: 2020-04-12
Deaths: 4617
Country: BEL","Date: 2020-04-13
Deaths: 4901
Country: BEL","Date: 2020-04-14
Deaths: 5178
Country: BEL","Date: 2020-04-15
Deaths: 5454
Country: BEL","Date: 2020-04-16
Deaths: 5708
Country: BEL","Date: 2020-04-17
Deaths: 5912
Country: BEL","Date: 2020-04-18
Deaths: 6123
Country: BEL","Date: 2020-04-19
Deaths: 6337
Country: BEL","Date: 2020-04-20
Deaths: 6536
Country: BEL","Date: 2020-04-21
Deaths: 6738
Country: BEL","Date: 2020-04-22
Deaths: 6930
Country: BEL","Date: 2020-04-23
Deaths: 7105
Country: BEL","Date: 2020-04-24
Deaths: 7252
Country: BEL","Date: 2020-04-25
Deaths: 7407
Country: BEL","Date: 2020-04-26
Deaths: 7583
Country: BEL","Date: 2020-04-27
Deaths: 7709
Country: BEL","Date: 2020-04-28
Deaths: 7824
Country: BEL","Date: 2020-04-29
Deaths: 7922
Country: BEL","Date: 2020-04-30
Deaths: 8006
Country: BEL","Date: 2020-05-01
Deaths: 8104
Country: BEL","Date: 2020-05-02
Deaths: 8177
Country: BEL","Date: 2020-05-03
Deaths: 8273
Country: BEL","Date: 2020-05-04
Deaths: 8370
Country: BEL","Date: 2020-05-05
Deaths: 8451
Country: BEL","Date: 2020-05-06
Deaths: 8526
Country: BEL","Date: 2020-05-07
Deaths: 8605
Country: BEL","Date: 2020-05-08
Deaths: 8675
Country: BEL","Date: 2020-05-09
Deaths: 8745
Country: BEL","Date: 2020-05-10
Deaths: 8818
Country: BEL","Date: 2020-05-11
Deaths: 8880
Country: BEL","Date: 2020-05-12
Deaths: 8928
Country: BEL","Date: 2020-05-13
Deaths: 8982
Country: BEL","Date: 2020-05-14
Deaths: 9028
Country: BEL","Date: 2020-05-15
Deaths: 9065
Country: BEL","Date: 2020-05-16
Deaths: 9093
Country: BEL","Date: 2020-05-17
Deaths: 9128
Country: BEL","Date: 2020-05-18
Deaths: 9164
Country: BEL","Date: 2020-05-19
Deaths: 9194
Country: BEL","Date: 2020-05-20
Deaths: 9226
Country: BEL","Date: 2020-05-21
Deaths: 9255
Country: BEL","Date: 2020-05-22
Deaths: 9291
Country: BEL","Date: 2020-05-23
Deaths: 9314
Country: BEL","Date: 2020-05-24
Deaths: 9346
Country: BEL"],"type":"scatter","mode":"lines","line":{"width":3.77952755905512,"color":"rgba(78,121,167,1)","dash":"solid"},"hoveron":"points","name":"BEL","legendgroup":"BEL","showlegend":true,"xaxis":"x","yaxis":"y","hoverinfo":"text","frame":null},{"x":[18322,18323,18324,18325,18326,18327,18328,18329,18330,18331,18332,18333,18334,18335,18336,18337,18338,18339,18340,18341,18342,18343,18344,18345,18346,18347,18348,18349,18350,18351,18352,18353,18354,18355,18356,18357,18358,18359,18360,18361,18362,18363,18364,18365,18366,18367,18368,18369,18370,18371,18372,18373,18374,18375,18376,18377,18378,18379,18380,18381,18382,18383,18384,18385,18386,18387,18388,18389,18390,18391,18392,18393,18394,18395,18396,18397,18398,18399,18400,18401,18402,18403,18404,18405,18406],"y":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,6,11,15,25,34,46,59,77,92,111,136,159,201,240,324,359,445,486,564,686,819,950,1057,1124,1223,1328,1532,1736,1924,2141,2354,2462,2587,2741,2906,3331,3704,4057,4286,4603,5083,5513,6006,6412,6761,7051,7367,7938,8588,9190,10017,10656,11123,11653,12461,13240,13999,14962,15662,16118,16853,17983,18859,20047,21048,22013,22666],"text":["Date: 2020-03-01
Deaths: 0
Country: BRA","Date: 2020-03-02
Deaths: 0
Country: BRA","Date: 2020-03-03
Deaths: 0
Country: BRA","Date: 2020-03-04
Deaths: 0
Country: BRA","Date: 2020-03-05
Deaths: 0
Country: BRA","Date: 2020-03-06
Deaths: 0
Country: BRA","Date: 2020-03-07
Deaths: 0
Country: BRA","Date: 2020-03-08
Deaths: 0
Country: BRA","Date: 2020-03-09
Deaths: 0
Country: BRA","Date: 2020-03-10
Deaths: 0
Country: BRA","Date: 2020-03-11
Deaths: 0
Country: BRA","Date: 2020-03-12
Deaths: 0
Country: BRA","Date: 2020-03-13
Deaths: 0
Country: BRA","Date: 2020-03-14
Deaths: 0
Country: BRA","Date: 2020-03-15
Deaths: 0
Country: BRA","Date: 2020-03-16
Deaths: 0
Country: BRA","Date: 2020-03-17
Deaths: 1
Country: BRA","Date: 2020-03-18
Deaths: 3
Country: BRA","Date: 2020-03-19
Deaths: 6
Country: BRA","Date: 2020-03-20
Deaths: 11
Country: BRA","Date: 2020-03-21
Deaths: 15
Country: BRA","Date: 2020-03-22
Deaths: 25
Country: BRA","Date: 2020-03-23
Deaths: 34
Country: BRA","Date: 2020-03-24
Deaths: 46
Country: BRA","Date: 2020-03-25
Deaths: 59
Country: BRA","Date: 2020-03-26
Deaths: 77
Country: BRA","Date: 2020-03-27
Deaths: 92
Country: BRA","Date: 2020-03-28
Deaths: 111
[Figure: interactive plotly chart (data residue removed) — cumulative reported COVID-19 deaths per country over time, 2020-03-01 through 2020-05-24, with one line per country: BRA, DEU, ESP, FRA, GBR, IRN, ITA, MEX.]
Deaths: 1221
Country: MEX","Date: 2020-04-25
Deaths: 1305
Country: MEX","Date: 2020-04-26
Deaths: 1351
Country: MEX","Date: 2020-04-27
Deaths: 1434
Country: MEX","Date: 2020-04-28
Deaths: 1569
Country: MEX","Date: 2020-04-29
Deaths: 1732
Country: MEX","Date: 2020-04-30
Deaths: 1859
Country: MEX","Date: 2020-05-01
Deaths: 1972
Country: MEX","Date: 2020-05-02
Deaths: 2061
Country: MEX","Date: 2020-05-03
Deaths: 2154
Country: MEX","Date: 2020-05-04
Deaths: 2271
Country: MEX","Date: 2020-05-05
Deaths: 2507
Country: MEX","Date: 2020-05-06
Deaths: 2704
Country: MEX","Date: 2020-05-07
Deaths: 2961
Country: MEX","Date: 2020-05-08
Deaths: 3160
Country: MEX","Date: 2020-05-09
Deaths: 3353
Country: MEX","Date: 2020-05-10
Deaths: 3465
Country: MEX","Date: 2020-05-11
Deaths: 3573
Country: MEX","Date: 2020-05-12
Deaths: 3926
Country: MEX","Date: 2020-05-13
Deaths: 4220
Country: MEX","Date: 2020-05-14
Deaths: 4477
Country: MEX","Date: 2020-05-15
Deaths: 4767
Country: MEX","Date: 2020-05-16
Deaths: 5045
Country: MEX","Date: 2020-05-17
Deaths: 5177
Country: MEX","Date: 2020-05-18
Deaths: 5332
Country: MEX","Date: 2020-05-19
Deaths: 5666
Country: MEX","Date: 2020-05-20
Deaths: 6090
Country: MEX","Date: 2020-05-21
Deaths: 6510
Country: MEX","Date: 2020-05-22
Deaths: 6989
Country: MEX","Date: 2020-05-23
Deaths: 7179
Country: MEX","Date: 2020-05-24
Deaths: 7394
Country: MEX"],"type":"scatter","mode":"lines","line":{"width":3.77952755905512,"color":"rgba(156,117,95,1)","dash":"solid"},"hoveron":"points","name":"MEX","legendgroup":"MEX","showlegend":true,"xaxis":"x","yaxis":"y","hoverinfo":"text","frame":null},{"x":[18322,18323,18324,18325,18326,18327,18328,18329,18330,18331,18332,18333,18334,18335,18336,18337,18338,18339,18340,18341,18342,18343,18344,18345,18346,18347,18348,18349,18350,18351,18352,18353,18354,18355,18356,18357,18358,18359,18360,18361,18362,18363,18364,18365,18366,18367,18368,18369,18370,18371,18372,18373,18374,18375,18376,18377,18378,18379,18380,18381,18382,18383,18384,18385,18386,18387,18388,18389,18390,18391,18392,18393,18394,18395,18396,18397,18398,18399,18400,18401,18402,18403,18404,18405,18406],"y":[1,6,7,11,12,14,17,21,22,28,36,42,50,60,74,100,134,165,259,350,442,587,786,1011,1320,1726,2269,2744,3420,4196,5367,6511,7938,9260,10870,12375,13894,16191,18270,20288,22357,24366,26086,27870,30262,32760,34844,37428,39775,40945,42686,45086,47412,49724,51493,53755,54881,56219,58355,60967,62996,64943,66369,67682,68922,71064,73455,75662,77180,78795,79526,80682,82356,84119,85898,87530,88754,89562,90347,91921,93439,94702,95979,97087,97720],"text":["Date: 2020-03-01
Deaths: 1
Country: USA","Date: 2020-03-02
Deaths: 6
Country: USA","Date: 2020-03-03
Deaths: 7
Country: USA","Date: 2020-03-04
Deaths: 11
Country: USA","Date: 2020-03-05
Deaths: 12
Country: USA","Date: 2020-03-06
Deaths: 14
Country: USA","Date: 2020-03-07
Deaths: 17
Country: USA","Date: 2020-03-08
Deaths: 21
Country: USA","Date: 2020-03-09
Deaths: 22
Country: USA","Date: 2020-03-10
Deaths: 28
Country: USA","Date: 2020-03-11
Deaths: 36
Country: USA","Date: 2020-03-12
Deaths: 42
Country: USA","Date: 2020-03-13
Deaths: 50
Country: USA","Date: 2020-03-14
Deaths: 60
Country: USA","Date: 2020-03-15
Deaths: 74
Country: USA","Date: 2020-03-16
Deaths: 100
Country: USA","Date: 2020-03-17
Deaths: 134
Country: USA","Date: 2020-03-18
Deaths: 165
Country: USA","Date: 2020-03-19
Deaths: 259
Country: USA","Date: 2020-03-20
Deaths: 350
Country: USA","Date: 2020-03-21
Deaths: 442
Country: USA","Date: 2020-03-22
Deaths: 587
Country: USA","Date: 2020-03-23
Deaths: 786
Country: USA","Date: 2020-03-24
Deaths: 1011
Country: USA","Date: 2020-03-25
Deaths: 1320
Country: USA","Date: 2020-03-26
Deaths: 1726
Country: USA","Date: 2020-03-27
Deaths: 2269
Country: USA","Date: 2020-03-28
Deaths: 2744
Country: USA","Date: 2020-03-29
Deaths: 3420
Country: USA","Date: 2020-03-30
Deaths: 4196
Country: USA","Date: 2020-03-31
Deaths: 5367
Country: USA","Date: 2020-04-01
Deaths: 6511
Country: USA","Date: 2020-04-02
Deaths: 7938
Country: USA","Date: 2020-04-03
Deaths: 9260
Country: USA","Date: 2020-04-04
Deaths: 10870
Country: USA","Date: 2020-04-05
Deaths: 12375
Country: USA","Date: 2020-04-06
Deaths: 13894
Country: USA","Date: 2020-04-07
Deaths: 16191
Country: USA","Date: 2020-04-08
Deaths: 18270
Country: USA","Date: 2020-04-09
Deaths: 20288
Country: USA","Date: 2020-04-10
Deaths: 22357
Country: USA","Date: 2020-04-11
Deaths: 24366
Country: USA","Date: 2020-04-12
Deaths: 26086
Country: USA","Date: 2020-04-13
Deaths: 27870
Country: USA","Date: 2020-04-14
Deaths: 30262
Country: USA","Date: 2020-04-15
Deaths: 32760
Country: USA","Date: 2020-04-16
Deaths: 34844
Country: USA","Date: 2020-04-17
Deaths: 37428
Country: USA","Date: 2020-04-18
Deaths: 39775
Country: USA","Date: 2020-04-19
Deaths: 40945
Country: USA","Date: 2020-04-20
Deaths: 42686
Country: USA","Date: 2020-04-21
Deaths: 45086
Country: USA","Date: 2020-04-22
Deaths: 47412
Country: USA","Date: 2020-04-23
Deaths: 49724
Country: USA","Date: 2020-04-24
Deaths: 51493
Country: USA","Date: 2020-04-25
Deaths: 53755
Country: USA","Date: 2020-04-26
Deaths: 54881
Country: USA","Date: 2020-04-27
Deaths: 56219
Country: USA","Date: 2020-04-28
Deaths: 58355
Country: USA","Date: 2020-04-29
Deaths: 60967
Country: USA","Date: 2020-04-30
Deaths: 62996
Country: USA","Date: 2020-05-01
Deaths: 64943
Country: USA","Date: 2020-05-02
Deaths: 66369
Country: USA","Date: 2020-05-03
Deaths: 67682
Country: USA","Date: 2020-05-04
Deaths: 68922
Country: USA","Date: 2020-05-05
Deaths: 71064
Country: USA","Date: 2020-05-06
Deaths: 73455
Country: USA","Date: 2020-05-07
Deaths: 75662
Country: USA","Date: 2020-05-08
Deaths: 77180
Country: USA","Date: 2020-05-09
Deaths: 78795
Country: USA","Date: 2020-05-10
Deaths: 79526
Country: USA","Date: 2020-05-11
Deaths: 80682
Country: USA","Date: 2020-05-12
Deaths: 82356
Country: USA","Date: 2020-05-13
Deaths: 84119
Country: USA","Date: 2020-05-14
Deaths: 85898
Country: USA","Date: 2020-05-15
Deaths: 87530
Country: USA","Date: 2020-05-16
Deaths: 88754
Country: USA","Date: 2020-05-17
Deaths: 89562
Country: USA","Date: 2020-05-18
Deaths: 90347
Country: USA","Date: 2020-05-19
Deaths: 91921
Country: USA","Date: 2020-05-20
Deaths: 93439
Country: USA","Date: 2020-05-21
Deaths: 94702
Country: USA","Date: 2020-05-22
Deaths: 95979
Country: USA","Date: 2020-05-23
Deaths: 97087
Country: USA","Date: 2020-05-24
Deaths: 97720
Country: USA"],"type":"scatter","mode":"lines","line":{"width":3.77952755905512,"color":"rgba(186,176,172,1)","dash":"solid"},"hoveron":"points","name":"USA","legendgroup":"USA","showlegend":true,"xaxis":"x","yaxis":"y","hoverinfo":"text","frame":null}],"layout":{"margin":{"t":47.4819427148194,"r":7.30593607305936,"b":41.51100041511,"l":46.027397260274},"font":{"color":"rgba(0,0,0,1)","family":"","size":14.6118721461187},"title":{"text":"Total deaths due to COVID-19","font":{"color":"rgba(0,0,0,1)","family":"","size":21.2536322125363},"x":0,"xref":"paper"},"xaxis":{"domain":[0,1],"automargin":true,"type":"linear","autorange":false,"range":[18317.8,18410.2],"tickmode":"array","ticktext":["feb.","mar.","abr.","may.","jun."],"tickvals":[null,18322,18353,18383,null],"categoryorder":"array","categoryarray":["feb.","mar.","abr.","may.","jun."],"nticks":null,"ticks":"","tickcolor":null,"ticklen":3.65296803652968,"tickwidth":0,"showticklabels":true,"tickfont":{"color":"rgba(77,77,77,1)","family":"","size":11.689497716895},"tickangle":-0,"showline":false,"linecolor":null,"linewidth":0,"showgrid":false,"gridcolor":null,"gridwidth":0,"zeroline":false,"anchor":"y","title":{"text":"Date","font":{"color":"rgba(0,0,0,1)","family":"","size":15.9402241594022}},"hoverformat":".2f"},"yaxis":{"domain":[0,1],"automargin":true,"type":"linear","autorange":false,"range":[-4886,102606],"tickmode":"array","ticktext":["0","25000","50000","75000","100000"],"tickvals":[0,25000,50000,75000,100000],"categoryorder":"array","categoryarray":["0","25000","50000","75000","100000"],"nticks":null,"ticks":"","tickcolor":null,"ticklen":3.65296803652968,"tickwidth":0,"showticklabels":true,"tickfont":{"color":"rgba(77,77,77,1)","family":"","size":11.689497716895},"tickangle":-0,"showline":false,"linecolor":null,"linewidth":0,"showgrid":true,"gridcolor":"rgba(235,235,235,1)","gridwidth":0.66417600664176,"zeroline":false,"anchor":"x","title":{"text":"","font":{"color":null,"family":null,"size":0}},"hoverforma
t":".2f"},"shapes":[{"type":"rect","fillcolor":null,"line":{"color":null,"width":0,"linetype":[]},"yref":"paper","xref":"paper","x0":0,"x1":1,"y0":0,"y1":1}],"showlegend":true,"legend":{"bgcolor":null,"bordercolor":null,"borderwidth":0,"font":{"color":"rgba(0,0,0,1)","family":"","size":11.689497716895},"y":0,"orientation":"h"},"hovermode":"closest","barmode":"relative","annotations":[{"x":1,"y":1.05,"text":"Source: covid19datahub.io","showarrow":false,"xref":"paper","yref":"paper","font":{"size":10}}]},"config":{"doubleClick":"reset","showSendToCloud":false},"source":"A","attrs":{"1cd02d5b3246":{"x":{},"y":{},"colour":{},"type":"scatter"}},"cur_data":"1cd02d5b3246","visdat":{"1cd02d5b3246":["function (y) ","x"]},"highlight":{"on":"plotly_click","persistent":false,"dynamic":false,"selectize":false,"opacityDim":0.2,"selected":{"opacity":1},"debounce":0},"shinyEvents":["plotly_hover","plotly_click","plotly_selected","plotly_relayout","plotly_brushed","plotly_brushing","plotly_clickannotation","plotly_doubleclick","plotly_deselect","plotly_afterplot","plotly_sunburstclick"],"base_url":"https://plot.ly"},"evals":[],"jsHooks":[]}

What about the countries most affected by the virus in terms of deaths relative to their population? That is just as straightforward.

ggplotly(
  covid_deaths %>%
    get_top_countries_df(top_by = Deaths_by_1Mpop, top_n = 10, since = 20200301) %>%
    select(-Deaths) %>%
    rename(Deaths = Deaths_by_1Mpop) %>%
    ggplot(aes(Date, Deaths, col = Country)) +
    geom_line(size = 1, show.legend = FALSE) +
    labs(title = "Total deaths per million people",
         caption = "Source: covid19datahub.io") +
    theme_minimal() +
    theme_custom() +
    scale_color_tableau() +
    NULL
) %>%
  layout(
    legend = list(orientation = "h", y = 0),
    annotations = list(
      x = 1, y = 1.05,
      text = "Source: covid19datahub.io",
      showarrow = FALSE,
      xref = "paper", yref = "paper",
      font = list(size = 10)
    )
  )
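The per-million transformation hidden inside the helper is itself simple. A minimal sketch with dplyr — using a toy data frame, since `covid_toy`, its column names, and the country codes here are illustrative assumptions rather than the actual `covid19datahub.io` schema — could look like:

```r
library(dplyr)

# Toy data: cumulative deaths and population for two hypothetical countries
covid_toy <- data.frame(
  Country    = c("AAA", "AAA", "BBB", "BBB"),
  Date       = as.Date(c("2020-05-23", "2020-05-24",
                         "2020-05-23", "2020-05-24")),
  Deaths     = c(100, 110, 2000, 2100),
  population = c(1e6, 1e6, 50e6, 50e6)
)

# Deaths per one million inhabitants: scale cumulative deaths by population
covid_toy <- covid_toy %>%
  mutate(Deaths_by_1Mpop = Deaths / population * 1e6)
```

Comparing this column across countries, rather than raw counts, is what makes the small-population countries in the plot above (e.g. Belgium) stand out.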

[Interactive plotly chart: "Total deaths per million people" — cumulative deaths per million inhabitants by country over time (including BEL, CHE, ECU, ESP, FRA). Source: covid19datahub.io]
Deaths: 279
Country: FRA","Date: 2020-04-18
Deaths: 289
Country: FRA","Date: 2020-04-19
Deaths: 294
Country: FRA","Date: 2020-04-20
Deaths: 303
Country: FRA","Date: 2020-04-21
Deaths: 310
Country: FRA","Date: 2020-04-22
Deaths: 319
Country: FRA","Date: 2020-04-23
Deaths: 326
Country: FRA","Date: 2020-04-24
Deaths: 332
Country: FRA","Date: 2020-04-25
Deaths: 338
Country: FRA","Date: 2020-04-26
Deaths: 341
Country: FRA","Date: 2020-04-27
Deaths: 348
Country: FRA","Date: 2020-04-28
Deaths: 353
Country: FRA","Date: 2020-04-29
Deaths: 360
Country: FRA","Date: 2020-04-30
Deaths: 364
Country: FRA","Date: 2020-05-01
Deaths: 367
Country: FRA","Date: 2020-05-02
Deaths: 370
Country: FRA","Date: 2020-05-03
Deaths: 372
Country: FRA","Date: 2020-05-04
Deaths: 376
Country: FRA","Date: 2020-05-05
Deaths: 381
Country: FRA","Date: 2020-05-06
Deaths: 385
Country: FRA","Date: 2020-05-07
Deaths: 388
Country: FRA","Date: 2020-05-08
Deaths: 392
Country: FRA","Date: 2020-05-09
Deaths: 393
Country: FRA","Date: 2020-05-10
Deaths: 394
Country: FRA","Date: 2020-05-11
Deaths: 398
Country: FRA","Date: 2020-05-12
Deaths: 403
Country: FRA","Date: 2020-05-13
Deaths: 404
Country: FRA","Date: 2020-05-14
Deaths: 409
Country: FRA","Date: 2020-05-15
Deaths: 411
Country: FRA","Date: 2020-05-16
Deaths: 412
Country: FRA","Date: 2020-05-17
Deaths: 420
Country: FRA","Date: 2020-05-18
Deaths: 422
Country: FRA","Date: 2020-05-19
Deaths: 418
Country: FRA","Date: 2020-05-20
Deaths: 420
Country: FRA","Date: 2020-05-21
Deaths: 421
Country: FRA","Date: 2020-05-22
Deaths: 422
Country: FRA","Date: 2020-05-23
Deaths: 423
Country: FRA","Date: 2020-05-24
Deaths: 424
Country: FRA"],"type":"scatter","mode":"lines","line":{"width":3.77952755905512,"color":"rgba(89,161,79,1)","dash":"solid"},"hoveron":"points","name":"FRA","legendgroup":"FRA","showlegend":true,"xaxis":"x","yaxis":"y","hoverinfo":"text","frame":null},{"x":[18322,18323,18324,18325,18326,18327,18328,18329,18330,18331,18332,18333,18334,18335,18336,18337,18338,18339,18340,18341,18342,18343,18344,18345,18346,18347,18348,18349,18350,18351,18352,18353,18354,18355,18356,18357,18358,18359,18360,18361,18362,18363,18364,18365,18366,18367,18368,18369,18370,18371,18372,18373,18374,18375,18376,18377,18378,18379,18380,18381,18382,18383,18384,18385,18386,18387,18388,18389,18390,18391,18392,18393,18394,18395,18396,18397,18398,18399,18400,18401,18402,18403,18404,18405,18406],"y":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,2,2,3,4,4,5,8,10,13,17,22,25,31,36,47,56,67,79,88,97,112,128,145,162,175,185,196,212,224,240,254,271,278,287,304,317,328,343,356,362,367,381,393,403,414,423,428,432,443,453,461,470,475,479,482,492,499,506,512,519,521,524,532,537,542,548,552,554],"text":["Date: 2020-03-01
Deaths: 0
Country: GBR","Date: 2020-03-02
Deaths: 0
Country: GBR","Date: 2020-03-03
Deaths: 0
Country: GBR","Date: 2020-03-04
Deaths: 0
Country: GBR","Date: 2020-03-05
Deaths: 0
Country: GBR","Date: 2020-03-06
Deaths: 0
Country: GBR","Date: 2020-03-07
Deaths: 0
Country: GBR","Date: 2020-03-08
Deaths: 0
Country: GBR","Date: 2020-03-09
Deaths: 0
Country: GBR","Date: 2020-03-10
Deaths: 0
Country: GBR","Date: 2020-03-11
Deaths: 0
Country: GBR","Date: 2020-03-12
Deaths: 0
Country: GBR","Date: 2020-03-13
Deaths: 0
Country: GBR","Date: 2020-03-14
Deaths: 0
Country: GBR","Date: 2020-03-15
Deaths: 1
Country: GBR","Date: 2020-03-16
Deaths: 1
Country: GBR","Date: 2020-03-17
Deaths: 1
Country: GBR","Date: 2020-03-18
Deaths: 2
Country: GBR","Date: 2020-03-19
Deaths: 2
Country: GBR","Date: 2020-03-20
Deaths: 3
Country: GBR","Date: 2020-03-21
Deaths: 4
Country: GBR","Date: 2020-03-22
Deaths: 4
Country: GBR","Date: 2020-03-23
Deaths: 5
Country: GBR","Date: 2020-03-24
Deaths: 8
Country: GBR","Date: 2020-03-25
Deaths: 10
Country: GBR","Date: 2020-03-26
Deaths: 13
Country: GBR","Date: 2020-03-27
Deaths: 17
Country: GBR","Date: 2020-03-28
Deaths: 22
Country: GBR","Date: 2020-03-29
Deaths: 25
Country: GBR","Date: 2020-03-30
Deaths: 31
Country: GBR","Date: 2020-03-31
Deaths: 36
Country: GBR","Date: 2020-04-01
Deaths: 47
Country: GBR","Date: 2020-04-02
Deaths: 56
Country: GBR","Date: 2020-04-03
Deaths: 67
Country: GBR","Date: 2020-04-04
Deaths: 79
Country: GBR","Date: 2020-04-05
Deaths: 88
Country: GBR","Date: 2020-04-06
Deaths: 97
Country: GBR","Date: 2020-04-07
Deaths: 112
Country: GBR","Date: 2020-04-08
Deaths: 128
Country: GBR","Date: 2020-04-09
Deaths: 145
Country: GBR","Date: 2020-04-10
Deaths: 162
Country: GBR","Date: 2020-04-11
Deaths: 175
Country: GBR","Date: 2020-04-12
Deaths: 185
Country: GBR","Date: 2020-04-13
Deaths: 196
Country: GBR","Date: 2020-04-14
Deaths: 212
Country: GBR","Date: 2020-04-15
Deaths: 224
Country: GBR","Date: 2020-04-16
Deaths: 240
Country: GBR","Date: 2020-04-17
Deaths: 254
Country: GBR","Date: 2020-04-18
Deaths: 271
Country: GBR","Date: 2020-04-19
Deaths: 278
Country: GBR","Date: 2020-04-20
Deaths: 287
Country: GBR","Date: 2020-04-21
Deaths: 304
Country: GBR","Date: 2020-04-22
Deaths: 317
Country: GBR","Date: 2020-04-23
Deaths: 328
Country: GBR","Date: 2020-04-24
Deaths: 343
Country: GBR","Date: 2020-04-25
Deaths: 356
Country: GBR","Date: 2020-04-26
Deaths: 362
Country: GBR","Date: 2020-04-27
Deaths: 367
Country: GBR","Date: 2020-04-28
Deaths: 381
Country: GBR","Date: 2020-04-29
Deaths: 393
Country: GBR","Date: 2020-04-30
Deaths: 403
Country: GBR","Date: 2020-05-01
Deaths: 414
Country: GBR","Date: 2020-05-02
Deaths: 423
Country: GBR","Date: 2020-05-03
Deaths: 428
Country: GBR","Date: 2020-05-04
Deaths: 432
Country: GBR","Date: 2020-05-05
Deaths: 443
Country: GBR","Date: 2020-05-06
Deaths: 453
Country: GBR","Date: 2020-05-07
Deaths: 461
Country: GBR","Date: 2020-05-08
Deaths: 470
Country: GBR","Date: 2020-05-09
Deaths: 475
Country: GBR","Date: 2020-05-10
Deaths: 479
Country: GBR","Date: 2020-05-11
Deaths: 482
Country: GBR","Date: 2020-05-12
Deaths: 492
Country: GBR","Date: 2020-05-13
Deaths: 499
Country: GBR","Date: 2020-05-14
Deaths: 506
Country: GBR","Date: 2020-05-15
Deaths: 512
Country: GBR","Date: 2020-05-16
Deaths: 519
Country: GBR","Date: 2020-05-17
Deaths: 521
Country: GBR","Date: 2020-05-18
Deaths: 524
Country: GBR","Date: 2020-05-19
Deaths: 532
Country: GBR","Date: 2020-05-20
Deaths: 537
Country: GBR","Date: 2020-05-21
Deaths: 542
Country: GBR","Date: 2020-05-22
Deaths: 548
Country: GBR","Date: 2020-05-23
Deaths: 552
Country: GBR","Date: 2020-05-24
Deaths: 554
Country: GBR"],"type":"scatter","mode":"lines","line":{"width":3.77952755905512,"color":"rgba(237,201,72,1)","dash":"solid"},"hoveron":"points","name":"GBR","legendgroup":"GBR","showlegend":true,"xaxis":"x","yaxis":"y","hoverinfo":"text","frame":null},{"x":[18322,18323,18324,18325,18326,18327,18328,18329,18330,18331,18332,18333,18334,18335,18336,18337,18338,18339,18340,18341,18342,18343,18344,18345,18346,18347,18348,18349,18350,18351,18352,18353,18354,18355,18356,18357,18358,18359,18360,18361,18362,18363,18364,18365,18366,18367,18368,18369,18370,18371,18372,18373,18374,18375,18376,18377,18378,18379,18380,18381,18382,18383,18384,18385,18386,18387,18388,18389,18390,18391,18392,18393,18394,18395,18396,18397,18398,18399,18400,18401,18402,18403,18404,18405,18406],"y":[1,1,1,2,2,3,4,6,8,10,14,17,21,24,30,36,41,49,56,67,80,91,101,113,124,135,151,166,178,192,206,218,230,243,254,263,273,283,292,303,312,322,329,339,349,358,367,376,384,392,399,408,415,423,430,437,441,446,453,458,463,467,475,478,481,485,491,496,500,503,506,509,512,515,519,523,526,528,530,532,535,538,540,542,543],"text":["Date: 2020-03-01
Deaths: 1
Country: ITA","Date: 2020-03-02
Deaths: 1
Country: ITA","Date: 2020-03-03
Deaths: 1
Country: ITA","Date: 2020-03-04
Deaths: 2
Country: ITA","Date: 2020-03-05
Deaths: 2
Country: ITA","Date: 2020-03-06
Deaths: 3
Country: ITA","Date: 2020-03-07
Deaths: 4
Country: ITA","Date: 2020-03-08
Deaths: 6
Country: ITA","Date: 2020-03-09
Deaths: 8
Country: ITA","Date: 2020-03-10
Deaths: 10
Country: ITA","Date: 2020-03-11
Deaths: 14
Country: ITA","Date: 2020-03-12
Deaths: 17
Country: ITA","Date: 2020-03-13
Deaths: 21
Country: ITA","Date: 2020-03-14
Deaths: 24
Country: ITA","Date: 2020-03-15
Deaths: 30
Country: ITA","Date: 2020-03-16
Deaths: 36
Country: ITA","Date: 2020-03-17
Deaths: 41
Country: ITA","Date: 2020-03-18
Deaths: 49
Country: ITA","Date: 2020-03-19
Deaths: 56
Country: ITA","Date: 2020-03-20
Deaths: 67
Country: ITA","Date: 2020-03-21
Deaths: 80
Country: ITA","Date: 2020-03-22
Deaths: 91
Country: ITA","Date: 2020-03-23
Deaths: 101
Country: ITA","Date: 2020-03-24
Deaths: 113
Country: ITA","Date: 2020-03-25
Deaths: 124
Country: ITA","Date: 2020-03-26
Deaths: 135
Country: ITA","Date: 2020-03-27
Deaths: 151
Country: ITA","Date: 2020-03-28
Deaths: 166
Country: ITA","Date: 2020-03-29
Deaths: 178
Country: ITA","Date: 2020-03-30
Deaths: 192
Country: ITA","Date: 2020-03-31
Deaths: 206
Country: ITA","Date: 2020-04-01
Deaths: 218
Country: ITA","Date: 2020-04-02
Deaths: 230
Country: ITA","Date: 2020-04-03
Deaths: 243
Country: ITA","Date: 2020-04-04
Deaths: 254
Country: ITA","Date: 2020-04-05
Deaths: 263
Country: ITA","Date: 2020-04-06
Deaths: 273
Country: ITA","Date: 2020-04-07
Deaths: 283
Country: ITA","Date: 2020-04-08
Deaths: 292
Country: ITA","Date: 2020-04-09
Deaths: 303
Country: ITA","Date: 2020-04-10
Deaths: 312
Country: ITA","Date: 2020-04-11
Deaths: 322
Country: ITA","Date: 2020-04-12
Deaths: 329
Country: ITA","Date: 2020-04-13
Deaths: 339
Country: ITA","Date: 2020-04-14
Deaths: 349
Country: ITA","Date: 2020-04-15
Deaths: 358
Country: ITA","Date: 2020-04-16
Deaths: 367
Country: ITA","Date: 2020-04-17
Deaths: 376
Country: ITA","Date: 2020-04-18
Deaths: 384
Country: ITA","Date: 2020-04-19
Deaths: 392
Country: ITA","Date: 2020-04-20
Deaths: 399
Country: ITA","Date: 2020-04-21
Deaths: 408
Country: ITA","Date: 2020-04-22
Deaths: 415
Country: ITA","Date: 2020-04-23
Deaths: 423
Country: ITA","Date: 2020-04-24
Deaths: 430
Country: ITA","Date: 2020-04-25
Deaths: 437
Country: ITA","Date: 2020-04-26
Deaths: 441
Country: ITA","Date: 2020-04-27
Deaths: 446
Country: ITA","Date: 2020-04-28
Deaths: 453
Country: ITA","Date: 2020-04-29
Deaths: 458
Country: ITA","Date: 2020-04-30
Deaths: 463
Country: ITA","Date: 2020-05-01
Deaths: 467
Country: ITA","Date: 2020-05-02
Deaths: 475
Country: ITA","Date: 2020-05-03
Deaths: 478
Country: ITA","Date: 2020-05-04
Deaths: 481
Country: ITA","Date: 2020-05-05
Deaths: 485
Country: ITA","Date: 2020-05-06
Deaths: 491
Country: ITA","Date: 2020-05-07
Deaths: 496
Country: ITA","Date: 2020-05-08
Deaths: 500
Country: ITA","Date: 2020-05-09
Deaths: 503
Country: ITA","Date: 2020-05-10
Deaths: 506
Country: ITA","Date: 2020-05-11
Deaths: 509
Country: ITA","Date: 2020-05-12
Deaths: 512
Country: ITA","Date: 2020-05-13
Deaths: 515
Country: ITA","Date: 2020-05-14
Deaths: 519
Country: ITA","Date: 2020-05-15
Deaths: 523
Country: ITA","Date: 2020-05-16
Deaths: 526
Country: ITA","Date: 2020-05-17
Deaths: 528
Country: ITA","Date: 2020-05-18
Deaths: 530
Country: ITA","Date: 2020-05-19
Deaths: 532
Country: ITA","Date: 2020-05-20
Deaths: 535
Country: ITA","Date: 2020-05-21
Deaths: 538
Country: ITA","Date: 2020-05-22
Deaths: 540
Country: ITA","Date: 2020-05-23
Deaths: 542
Country: ITA","Date: 2020-05-24
Deaths: 543
Country: ITA"],"type":"scatter","mode":"lines","line":{"width":3.77952755905512,"color":"rgba(176,122,161,1)","dash":"solid"},"hoveron":"points","name":"ITA","legendgroup":"ITA","showlegend":true,"xaxis":"x","yaxis":"y","hoverinfo":"text","frame":null},{"x":[18322,18323,18324,18325,18326,18327,18328,18329,18330,18331,18332,18333,18334,18335,18336,18337,18338,18339,18340,18341,18342,18343,18344,18345,18346,18347,18348,18349,18350,18351,18352,18353,18354,18355,18356,18357,18358,18359,18360,18361,18362,18363,18364,18365,18366,18367,18368,18369,18370,18371,18372,18373,18374,18375,18376,18377,18378,18379,18380,18381,18382,18383,18384,18385,18386,18387,18388,18389,18390,18391,18392,18393,18394,18395,18396,18397,18398,18399,18400,18401,18402,18403,18404,18405,18406],"y":[0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,2,3,4,6,8,10,12,16,21,25,32,37,45,50,60,68,78,86,96,102,108,122,130,139,146,153,159,164,171,182,192,201,209,214,218,227,235,242,249,256,260,262,265,273,278,284,289,293,295,300,302,307,311,315,316,317,320,323,324,327,329,330,330,332,334,335,336,337,338],"text":["Date: 2020-03-01
Deaths: 0
Country: NLD","Date: 2020-03-02
Deaths: 0
Country: NLD","Date: 2020-03-03
Deaths: 0
Country: NLD","Date: 2020-03-04
Deaths: 0
Country: NLD","Date: 2020-03-05
Deaths: 0
Country: NLD","Date: 2020-03-06
Deaths: 0
Country: NLD","Date: 2020-03-07
Deaths: 0
Country: NLD","Date: 2020-03-08
Deaths: 0
Country: NLD","Date: 2020-03-09
Deaths: 0
Country: NLD","Date: 2020-03-10
Deaths: 0
Country: NLD","Date: 2020-03-11
Deaths: 0
Country: NLD","Date: 2020-03-12
Deaths: 0
Country: NLD","Date: 2020-03-13
Deaths: 1
Country: NLD","Date: 2020-03-14
Deaths: 1
Country: NLD","Date: 2020-03-15
Deaths: 1
Country: NLD","Date: 2020-03-16
Deaths: 1
Country: NLD","Date: 2020-03-17
Deaths: 2
Country: NLD","Date: 2020-03-18
Deaths: 3
Country: NLD","Date: 2020-03-19
Deaths: 4
Country: NLD","Date: 2020-03-20
Deaths: 6
Country: NLD","Date: 2020-03-21
Deaths: 8
Country: NLD","Date: 2020-03-22
Deaths: 10
Country: NLD","Date: 2020-03-23
Deaths: 12
Country: NLD","Date: 2020-03-24
Deaths: 16
Country: NLD","Date: 2020-03-25
Deaths: 21
Country: NLD","Date: 2020-03-26
Deaths: 25
Country: NLD","Date: 2020-03-27
Deaths: 32
Country: NLD","Date: 2020-03-28
Deaths: 37
Country: NLD","Date: 2020-03-29
Deaths: 45
Country: NLD","Date: 2020-03-30
Deaths: 50
Country: NLD","Date: 2020-03-31
Deaths: 60
Country: NLD","Date: 2020-04-01
Deaths: 68
Country: NLD","Date: 2020-04-02
Deaths: 78
Country: NLD","Date: 2020-04-03
Deaths: 86
Country: NLD","Date: 2020-04-04
Deaths: 96
Country: NLD","Date: 2020-04-05
Deaths: 102
Country: NLD","Date: 2020-04-06
Deaths: 108
Country: NLD","Date: 2020-04-07
Deaths: 122
Country: NLD","Date: 2020-04-08
Deaths: 130
Country: NLD","Date: 2020-04-09
Deaths: 139
Country: NLD","Date: 2020-04-10
Deaths: 146
Country: NLD","Date: 2020-04-11
Deaths: 153
Country: NLD","Date: 2020-04-12
Deaths: 159
Country: NLD","Date: 2020-04-13
Deaths: 164
Country: NLD","Date: 2020-04-14
Deaths: 171
Country: NLD","Date: 2020-04-15
Deaths: 182
Country: NLD","Date: 2020-04-16
Deaths: 192
Country: NLD","Date: 2020-04-17
Deaths: 201
Country: NLD","Date: 2020-04-18
Deaths: 209
Country: NLD","Date: 2020-04-19
Deaths: 214
Country: NLD","Date: 2020-04-20
Deaths: 218
Country: NLD","Date: 2020-04-21
Deaths: 227
Country: NLD","Date: 2020-04-22
Deaths: 235
Country: NLD","Date: 2020-04-23
Deaths: 242
Country: NLD","Date: 2020-04-24
Deaths: 249
Country: NLD","Date: 2020-04-25
Deaths: 256
Country: NLD","Date: 2020-04-26
Deaths: 260
Country: NLD","Date: 2020-04-27
Deaths: 262
Country: NLD","Date: 2020-04-28
Deaths: 265
Country: NLD","Date: 2020-04-29
Deaths: 273
Country: NLD","Date: 2020-04-30
Deaths: 278
Country: NLD","Date: 2020-05-01
Deaths: 284
Country: NLD","Date: 2020-05-02
Deaths: 289
Country: NLD","Date: 2020-05-03
Deaths: 293
Country: NLD","Date: 2020-05-04
Deaths: 295
Country: NLD","Date: 2020-05-05
Deaths: 300
Country: NLD","Date: 2020-05-06
Deaths: 302
Country: NLD","Date: 2020-05-07
Deaths: 307
Country: NLD","Date: 2020-05-08
Deaths: 311
Country: NLD","Date: 2020-05-09
Deaths: 315
Country: NLD","Date: 2020-05-10
Deaths: 316
Country: NLD","Date: 2020-05-11
Deaths: 317
Country: NLD","Date: 2020-05-12
Deaths: 320
Country: NLD","Date: 2020-05-13
Deaths: 323
Country: NLD","Date: 2020-05-14
Deaths: 324
Country: NLD","Date: 2020-05-15
Deaths: 327
Country: NLD","Date: 2020-05-16
Deaths: 329
Country: NLD","Date: 2020-05-17
Deaths: 330
Country: NLD","Date: 2020-05-18
Deaths: 330
Country: NLD","Date: 2020-05-19
Deaths: 332
Country: NLD","Date: 2020-05-20
Deaths: 334
Country: NLD","Date: 2020-05-21
Deaths: 335
Country: NLD","Date: 2020-05-22
Deaths: 336
Country: NLD","Date: 2020-05-23
Deaths: 337
Country: NLD","Date: 2020-05-24
Deaths: 338
Country: NLD"],"type":"scatter","mode":"lines","line":{"width":3.77952755905512,"color":"rgba(255,157,167,1)","dash":"solid"},"hoveron":"points","name":"NLD","legendgroup":"NLD","showlegend":true,"xaxis":"x","yaxis":"y","hoverinfo":"text","frame":null},{"x":[18322,18323,18324,18325,18326,18327,18328,18329,18330,18331,18332,18333,18334,18335,18336,18337,18338,18339,18340,18341,18342,18343,18344,18345,18346,18347,18348,18349,18350,18351,18352,18353,18354,18355,18356,18357,18358,18359,18360,18361,18362,18363,18364,18365,18366,18367,18368,18369,18370,18371,18372,18373,18374,18375,18376,18377,18378,18379,18380,18381,18382,18383,18384,18385,18386,18387,18388,18389,18390,18391,18392,18393,18394,18395,18396,18397,18398,18399,18400,18401,18402,18403,18404,18405,18406],"y":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,2,3,4,5,6,8,10,13,16,20,23,28,33,38,45,52,59,68,77,85,96,105,114,124,133,142,151,162,173,181,189,198,206,212,220,228,237,244,252,259,267,275,282,290,297,304,313,319,326,334,340,347,354,360,365,370,375,380,384,389,394,397,400,403,406,409,410],"text":["Date: 2020-03-01
Deaths: 0
Country: SWE","Date: 2020-03-02
Deaths: 0
Country: SWE","Date: 2020-03-03
Deaths: 0
Country: SWE","Date: 2020-03-04
Deaths: 0
Country: SWE","Date: 2020-03-05
Deaths: 0
Country: SWE","Date: 2020-03-06
Deaths: 0
Country: SWE","Date: 2020-03-07
Deaths: 0
Country: SWE","Date: 2020-03-08
Deaths: 0
Country: SWE","Date: 2020-03-09
Deaths: 0
Country: SWE","Date: 2020-03-10
Deaths: 0
Country: SWE","Date: 2020-03-11
Deaths: 0
Country: SWE","Date: 2020-03-12
Deaths: 0
Country: SWE","Date: 2020-03-13
Deaths: 0
Country: SWE","Date: 2020-03-14
Deaths: 0
Country: SWE","Date: 2020-03-15
Deaths: 0
Country: SWE","Date: 2020-03-16
Deaths: 1
Country: SWE","Date: 2020-03-17
Deaths: 1
Country: SWE","Date: 2020-03-18
Deaths: 1
Country: SWE","Date: 2020-03-19
Deaths: 2
Country: SWE","Date: 2020-03-20
Deaths: 3
Country: SWE","Date: 2020-03-21
Deaths: 4
Country: SWE","Date: 2020-03-22
Deaths: 5
Country: SWE","Date: 2020-03-23
Deaths: 6
Country: SWE","Date: 2020-03-24
Deaths: 8
Country: SWE","Date: 2020-03-25
Deaths: 10
Country: SWE","Date: 2020-03-26
Deaths: 13
Country: SWE","Date: 2020-03-27
Deaths: 16
Country: SWE","Date: 2020-03-28
Deaths: 20
Country: SWE","Date: 2020-03-29
Deaths: 23
Country: SWE","Date: 2020-03-30
Deaths: 28
Country: SWE","Date: 2020-03-31
Deaths: 33
Country: SWE","Date: 2020-04-01
Deaths: 38
Country: SWE","Date: 2020-04-02
Deaths: 45
Country: SWE","Date: 2020-04-03
Deaths: 52
Country: SWE","Date: 2020-04-04
Deaths: 59
Country: SWE","Date: 2020-04-05
Deaths: 68
Country: SWE","Date: 2020-04-06
Deaths: 77
Country: SWE","Date: 2020-04-07
Deaths: 85
Country: SWE","Date: 2020-04-08
Deaths: 96
Country: SWE","Date: 2020-04-09
Deaths: 105
Country: SWE","Date: 2020-04-10
Deaths: 114
Country: SWE","Date: 2020-04-11
Deaths: 124
Country: SWE","Date: 2020-04-12
Deaths: 133
Country: SWE","Date: 2020-04-13
Deaths: 142
Country: SWE","Date: 2020-04-14
Deaths: 151
Country: SWE","Date: 2020-04-15
Deaths: 162
Country: SWE","Date: 2020-04-16
Deaths: 173
Country: SWE","Date: 2020-04-17
Deaths: 181
Country: SWE","Date: 2020-04-18
Deaths: 189
Country: SWE","Date: 2020-04-19
Deaths: 198
Country: SWE","Date: 2020-04-20
Deaths: 206
Country: SWE","Date: 2020-04-21
Deaths: 212
Country: SWE","Date: 2020-04-22
Deaths: 220
Country: SWE","Date: 2020-04-23
Deaths: 228
Country: SWE","Date: 2020-04-24
Deaths: 237
Country: SWE","Date: 2020-04-25
Deaths: 244
Country: SWE","Date: 2020-04-26
Deaths: 252
Country: SWE","Date: 2020-04-27
Deaths: 259
Country: SWE","Date: 2020-04-28
Deaths: 267
Country: SWE","Date: 2020-04-29
Deaths: 275
Country: SWE","Date: 2020-04-30
Deaths: 282
Country: SWE","Date: 2020-05-01
Deaths: 290
Country: SWE","Date: 2020-05-02
Deaths: 297
Country: SWE","Date: 2020-05-03
Deaths: 304
Country: SWE","Date: 2020-05-04
Deaths: 313
Country: SWE","Date: 2020-05-05
Deaths: 319
Country: SWE","Date: 2020-05-06
Deaths: 326
Country: SWE","Date: 2020-05-07
Deaths: 334
Country: SWE","Date: 2020-05-08
Deaths: 340
Country: SWE","Date: 2020-05-09
Deaths: 347
Country: SWE","Date: 2020-05-10
Deaths: 354
Country: SWE","Date: 2020-05-11
Deaths: 360
Country: SWE","Date: 2020-05-12
Deaths: 365
Country: SWE","Date: 2020-05-13
Deaths: 370
Country: SWE","Date: 2020-05-14
Deaths: 375
Country: SWE","Date: 2020-05-15
Deaths: 380
Country: SWE","Date: 2020-05-16
Deaths: 384
Country: SWE","Date: 2020-05-17
Deaths: 389
Country: SWE","Date: 2020-05-18
Deaths: 394
Country: SWE","Date: 2020-05-19
Deaths: 397
Country: SWE","Date: 2020-05-20
Deaths: 400
Country: SWE","Date: 2020-05-21
Deaths: 403
Country: SWE","Date: 2020-05-22
Deaths: 406
Country: SWE","Date: 2020-05-23
Deaths: 409
Country: SWE","Date: 2020-05-24
Deaths: 410
Country: SWE"],"type":"scatter","mode":"lines","line":{"width":3.77952755905512,"color":"rgba(156,117,95,1)","dash":"solid"},"hoveron":"points","name":"SWE","legendgroup":"SWE","showlegend":true,"xaxis":"x","yaxis":"y","hoverinfo":"text","frame":null},{"x":[18322,18323,18324,18325,18326,18327,18328,18329,18330,18331,18332,18333,18334,18335,18336,18337,18338,18339,18340,18341,18342,18343,18344,18345,18346,18347,18348,18349,18350,18351,18352,18353,18354,18355,18356,18357,18358,18359,18360,18361,18362,18363,18364,18365,18366,18367,18368,18369,18370,18371,18372,18373,18374,18375,18376,18377,18378,18379,18380,18381,18382,18383,18384,18385,18386,18387,18388,18389,18390,18391,18392,18393,18394,18395,18396,18397,18398,18399,18400,18401,18402,18403,18404,18405,18406],"y":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,2,2,3,4,5,7,8,10,13,16,20,24,28,33,38,43,50,56,62,68,75,80,85,93,100,107,115,122,125,131,138,145,152,158,165,168,172,179,187,193,199,203,207,211,218,225,232,236,241,243,247,252,257,263,268,272,274,277,281,286,290,294,297,299],"text":["Date: 2020-03-01
Deaths: 0
Country: USA","Date: 2020-03-02
Deaths: 0
Country: USA","Date: 2020-03-03
Deaths: 0
Country: USA","Date: 2020-03-04
Deaths: 0
Country: USA","Date: 2020-03-05
Deaths: 0
Country: USA","Date: 2020-03-06
Deaths: 0
Country: USA","Date: 2020-03-07
Deaths: 0
Country: USA","Date: 2020-03-08
Deaths: 0
Country: USA","Date: 2020-03-09
Deaths: 0
Country: USA","Date: 2020-03-10
Deaths: 0
Country: USA","Date: 2020-03-11
Deaths: 0
Country: USA","Date: 2020-03-12
Deaths: 0
Country: USA","Date: 2020-03-13
Deaths: 0
Country: USA","Date: 2020-03-14
Deaths: 0
Country: USA","Date: 2020-03-15
Deaths: 0
Country: USA","Date: 2020-03-16
Deaths: 0
Country: USA","Date: 2020-03-17
Deaths: 0
Country: USA","Date: 2020-03-18
Deaths: 1
Country: USA","Date: 2020-03-19
Deaths: 1
Country: USA","Date: 2020-03-20
Deaths: 1
Country: USA","Date: 2020-03-21
Deaths: 1
Country: USA","Date: 2020-03-22
Deaths: 2
Country: USA","Date: 2020-03-...

To leave a comment for the author, please follow the link and comment on their blog: R | TypeThePipe.
