Goals and introduction

The purpose of this vignette is to guide new users through data acquisition and analysis, as well as to demonstrate the basic capabilities of the divDyn R package. This tutorial uses scleractinian corals as an example group, selected both for data quality reasons and because we have readily available additional information about the coral taxa with which to customize the analyses.

Occurrence data download

To run the analyses, the occurrence data first have to be downloaded and processed. You can download all PBDB occurrences using the chronosphere R package. The chronosphere is an initiative we started at the GeoZentrum Nordbayern in Erlangen to facilitate the use, traceability, portability and version control of data in scientific analyses.

The package is available from the CRAN repositories and can be installed with the following line of code (you will probably have to select your preferred download mirror).

install.packages("chronosphere")

After the package is installed, it is ready for use; you can attach it with:

library(chronosphere)
dat <- fetch("pbdb", ver="20220510")
## If you use the data in publications, please cite its
## reference(s), as well as that of the 'chronosphere' package.
## Please remember to acknowledge the Paleobiology Database (http://paleobiodb.org) in your publication.

This is a pre-processed version of the PBDB that was downloaded with this API call:

attributes(dat)$chronosphere$API
## [1] "https://paleobiodb.org/data1.2/occs/list.csv?datainfo&rowcount&interval=Ediacaran,Holocene&show=class,classext,genus,subgenus,coll,coords,loc,paleoloc,strat,stratext,lith,env,ref,crmod,timebins,timecompare,refattr,entname,attr,ident,img,plant,abund,ecospace,taphonomy,etbasis,pres,prot,lithext,geo,methods,resgroup,refattr,ent"

You can copy and paste this URL into your browser and the PBDB will compile the current corresponding dataset for you. It normally takes about 15-30 minutes for the dataset to be compiled, plus another couple of minutes to download the .csv file, which you then still have to read in using read.table().
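The final read-in step mentioned above can be sketched as follows. The file name and toy contents are invented for illustration (a real PBDB download has many more columns), so the example below writes a tiny stand-in file first to stay self-contained.

```r
# A minimal, hypothetical sketch of reading in a downloaded .csv
# (demonstrated with a tiny stand-in file; contents are invented)
toyFile <- tempfile(fileext = ".csv")
writeLines(c(
  "occurrence_no,genus,early_interval,late_interval",
  "1,Montlivaltia,Anisian,",
  "2,Isastrea,Oxfordian,Kimmeridgian"
), toyFile)

# for a real download requested with the 'datainfo' parameter, the metadata
# block at the top of the file may have to be skipped (skip argument)
occ <- read.csv(toyFile, stringsAsFactors = FALSE)
nrow(occ)
```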

The PBDB API allows much more flexible downloads, but this denormalized version of the PBDB is sufficient for the overwhelming majority of analyses. In some cases the data you are interested in are not in this subset and you have to use the API yourself.

Data processing

Installing and loading divDyn

To process the stratigraphic information in the downloaded occurrence file and to run the analyses, you have to install and attach the divDyn package.

The package is available from the CRAN repositories and can be installed with the following line of code (you will probably have to select your preferred download mirror).

install.packages("divDyn")

After the package is installed, it is ready for use; you can attach it with:

library(divDyn)

Initial filtering

The downloaded dataset needs to be subsetted to represent the order of scleractinian corals.

dat <- dat[dat$order=="Scleractinia",]

The analyses will be conducted at the level of genera.

The entries that have no valid genus names in the PaleoDB can be omitted at this stage, as they do not provide any information about the evolution of the group.

# omit non-genus entries
dat <- dat[dat$genus != "", ]

The dataset should be cleaned as much as possible at this stage. For didactic purposes and to keep things simple, this step is skipped in this tutorial.
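As an illustration of what such cleaning might involve, the sketch below drops occurrences flagged with uncertain genus identifications. The 'genus_reso' field is part of the standard PBDB download, but the toy data and the choice of flags here are ours, not part of this tutorial:

```r
# Hypothetical cleaning sketch: drop occurrences whose genus identification
# is qualified (e.g. 'cf.', 'aff.', '?') in the PBDB 'genus_reso' field.
# Toy data are used so the example is self-contained.
toy <- data.frame(
  genus = c("Montlivaltia", "Isastrea", "Stylina"),
  genus_reso = c("", "cf.", ""),
  stringsAsFactors = FALSE
)

# keep only entries without a qualifying resolution flag
clean <- toy[!toy$genus_reso %in% c("cf.", "aff.", "?"), ]
nrow(clean)
```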

Stratigraphic resolution

Every collection in the PaleoDB has user-entered 'early_interval' and 'late_interval' fields. These fields are used to place the collection in stratigraphy/geological time. The 'early_interval' field is mandatory for every entry, while the 'late_interval' is only necessary if the collection does not fit in the interval specified as 'early_interval'. Otherwise the 'late_interval' field is blank (empty quotes: "").

The divDyn package uses two frequently applied discrete time scales as the finest stratigraphic resolution of the analyses: the stage-level time scale (stages, 95 bins) and the ten-million-year time scale (tens, 49 bins). These can be attached with the data() function:

data(stages)
data(tens)

These will be useful for plotting as they contain the age estimates for the intervals, but they are not of much help in finding out which interval the 'early_interval' and 'late_interval' fields point to. That information is in the keys object.

data(keys)

This list is designed to be used with the categorize() function of the package, which takes the large number of interval entries and replaces them with the identifier of the group they belong to. It has to be run on both the 'early_interval' and the 'late_interval' fields.

# the 'stg' entries (lookup)
stgMin <- categorize(dat[ ,"early_interval"], keys$stgInt)
stgMax <- categorize(dat[ ,"late_interval"], keys$stgInt)

The output of this function is either an interval number that corresponds to the stages timescale, -1, indicating that the value was empty quotes, or NA, if the interval is either not in the lookup table or is too broad.
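To make this behaviour concrete, here is a toy sketch of what such a lookup accomplishes. The entries are invented and far simpler than the real keys object, and the function is not the package's implementation:

```r
# Toy stand-in for a lookup list like keys$stgInt (invented entries)
toyKey <- list("54" = "Anisian", "55" = "Ladinian")

# a categorize()-like lookup: known names map to their group's identifier,
# empty quotes map to "-1", unknown names map to NA
toyCategorize <- function(x, key) {
  flat <- unlist(lapply(names(key), function(g) {
    setNames(rep(g, length(key[[g]])), key[[g]])
  }))
  out <- unname(flat[x])
  out[x == ""] <- "-1"
  out
}

toyCategorize(c("Anisian", "", "Jurassic"), toyKey)
```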

Because this scheme is designed for general use, the output of this function is a vector of character values. These have to be converted to numeric values, otherwise their order would not be correct.

# convert to numeric
  stgMin <- as.numeric(stgMin)
  stgMax <- as.numeric(stgMax)

Now these two vectors have to be combined. There are several ways to handle stratigraphic uncertainty, but for the sake of simplicity, let’s just omit every occurrence that cannot be assigned to a single stage in the stages timescale. For that, we will create an empty container, check whether the interval entries of an occurrence satisfy the condition above, and if so, save the interval ID.

# empty container
dat$stg <- rep(NA, nrow(dat))

# select entries, where
stgCondition <- c(
# the early and late interval fields indicate the same stg
  which(stgMax==stgMin),
# or the late_interval field is empty
  which(stgMax==-1))

# in these entries, use the stg indicated by the early_interval
  dat$stg[stgCondition] <- stgMin[stgCondition] 

Now every occurrence entry has either a single integer number in the stg column, or NA as its interval identifier. Note that the code above cannot be used for Cambrian and Ordovician occurrences due to stratigraphic problems. If you are interested in assigning these to stage-level intervals, we recommend taking a look at the vignette we wrote describing global marine analyses at the Phanerozoic scale:

https://github.com/divDyn/ddPhanero/blob/master/doc/1.0.1/dd_phanero.pdf

The assignments can be repeated for the 10-million year bins with the following commands:

# a. categorize interval names to bin numbers
  # categorize is the new function of the package
  binMin<-categorize(dat[,"early_interval"],keys$binInt)
  binMax<-categorize(dat[,"late_interval"],keys$binInt)
  # convert to numeric
  binMin<-as.numeric(binMin)
  binMax<-as.numeric(binMax)
# b. resolve max-min interval uncertainty
# final variable (empty)
  dat$bin <- rep(NA, nrow(dat))
# use entries, where
  binCondition <- c(
  # the early and late interval fields indicate the same bin
    which(binMax==binMin),
  # or the late_interval field is empty
    which(binMax==-1))
# in these entries, use the bin indicated by the early_interval
  dat$bin[binCondition] <- binMin[binCondition]

Sampling assessment

You can assess how well the stratigraphic resolution turned out by running the table() function on the interval-identifier vector:

table(dat$stg)
## 
##   49   54   55   56   57   58   59   60   61   62   63   64   65   66   67   68 
##    5  114   72  446  910  600  101   86  220   80   87  491  383  231 1953 1149 
##   69   70   71   72   73   74   75   76   77   78   79   80   81   82   83   84 
##  988  157  288  207  483 1025  389  529  124   71  158  294  745  377  311  235 
##   85   86   87   88   89   90   91   92   93   94   95 
##  225  458  656  519 1127 1818 1184 1463 1903 7825 1201

Summing it tells us how many of the overall occurrences we can assign to stratigraphic stages.

sum(table(dat$stg))
## [1] 31688
# which is a
sum(table(dat$stg))/nrow(dat)
## [1] 0.8528826
# proportion of the total data.

As we cannot use unresolved occurrences in any way, coding will be easier if we omit them at this stage.

# omit unbinned
dats <- dat[!is.na(dat$stg),]

If you take a look at the stages object, you will notice that, besides the few entries in stages 37, 49 and 50, scleractinians become important fossils from the Anisian stage onward.

# omit Paleozoic
dats <- dats[dats$stg>52,]

You can get a more organized view of sampling parameters by running the binstat() function of divDyn, which calculates occurrence, collection, and reference counts in a single line. This is the general interface of the high-level functions of the package: you provide the occurrence data.frame as the first argument, and then the column names as additional arguments. The column tax stands for the taxon names, bin for the discrete time bins, coll for collection identifiers and ref for reference identifiers.

bsFull <- binstat(dats, tax="genus", bin="stg", 
  coll="collection_no", ref="reference_no")
## The database contains duplicate occurrences (multiple species/genus).
bsFull$occs
##  [1]   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA
## [16]   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA
## [31]   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA
## [46]   NA   NA   NA   NA   NA   NA   NA   NA  114   72  446  910  600  101   86
## [61]  220   80   87  491  383  231 1953 1149  988  157  288  207  483 1025  389
## [76]  529  124   71  158  294  745  377  311  235  225  458  656  519 1127 1818
## [91] 1184 1463 1903 7825 1201

This output is organized so that the indices of the values in the vector match the bin identifiers (e.g. the 60th value is for stg 60). The default setting of the function outputs a message about duplicate occurrences. This warns us that some collections contain more than one entry of the same genus (i.e. more than one species per genus). If you are interested in genus-level analyses, it is probably better to count these as one, which you can do with the duplicates=FALSE option.

bs <- binstat(dats, tax="genus", bin="stg", 
  coll="collection_no", ref="reference_no", duplicates=FALSE)
bs$occs
##  [1]   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA
## [16]   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA
## [31]   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA   NA
## [46]   NA   NA   NA   NA   NA   NA   NA   NA   99   66  355  769  442   77   72
## [61]  201   77   74  411  291  206 1663  906  701  126  243  190  387  824  348
## [76]  448  115   63  146  224  632  341  271  217  192  407  508  410  908 1207
## [91]  954 1117 1474 6154  972
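What duplicates=FALSE amounts to can be illustrated with a short base R sketch on invented data (this is not the package's internal code): several entries of the same genus within one collection count as a single occurrence.

```r
# Toy data: two species of 'Stylina' entered in collection 1
toy <- data.frame(
  collection_no = c(1, 1, 1, 2),
  genus = c("Stylina", "Stylina", "Isastrea", "Stylina"),
  stringsAsFactors = FALSE
)

nrow(toy)                                         # raw entries: 4
nrow(unique(toy[, c("collection_no", "genus")]))  # one per genus and collection: 3
```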

Plotting

Plotting these variables is probably better than just looking at the numbers. The package includes a powerful time-scale plotting function that lets you visualize how time is broken down into discrete intervals. This highly customizable function is built on the basic plot() function, and most of its arguments are inherited. The following function call will draw a plot spanning the past 250 million years, with series-level shading and system-level boxes at the bottom:

tsplot(stages, boxes="sys", boxes.col="systemCol", 
    shading="series", xlim=c(250, 0), ylim=c(0,2000))

Then you can draw the number of occurrences with lines(). As the same row indices in the stages object and in the result of binstat() indicate values that belong to the same interval, you do not need any subsetting to align the two data.frames.

tsplot(stages, boxes="sys", boxes.col="systemCol", 
  shading="series", xlim=c(250, 0), ylim=c(0,2000), ylab="Number occurrences")
lines(stages$mid, bs$occs)

We will mostly use the same basic function call for plotting, and typing all these arguments is time consuming. We can save some time by defining a wrapper function around the plotting call that fixes the arguments we want to keep the same, while allowing us to change the rest. This is fairly straightforward with the ... argument, which allows you to pass arguments through function calls.

tp <- function(...) tsplot(stages, boxes="sys", boxes.col="systemCol", 
  shading="series", xlim=52:95, ...)

Now, every time you call the newly defined function tp(), it will have the same colour, shading and x-interval, while you can change the other arguments of the tsplot() function. For instance, if you wanted to see how the number of collections changes over time, you can draw a plot similar to the one above with:

tp(ylim=c(0,350), ylab="Number of collections")
lines(stages$mid, bs$colls)

The nicest feature of this function is that you are not restricted to the built-in data: you can create your own timescale, as long as you use the same structure as in stages and specify the top and bottom boundaries of the intervals (top and bottom columns).
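A minimal sketch of such a custom timescale follows; the boundary ages are invented and only the bottom/top structure mirrors the stages object.

```r
# Hypothetical two-interval timescale in the structure described above:
# bottom and top boundary ages (Ma) of each interval (values invented)
myScale <- data.frame(
  bottom = c(250, 200),
  top    = c(200, 150)
)

# midpoints, analogous to the 'mid' column of the stages object,
# useful for drawing time series with lines()
myScale$mid <- (myScale$bottom + myScale$top) / 2
myScale
```

An object like this could then be given as the first argument of tsplot() instead of stages.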

It is not surprising that occurrence and collection numbers have the same overall trajectory. As you can see, the sampling of corals is highly volatile over time, with a couple of marked peaks in the Late Triassic, Late Jurassic, mid- and Late Cretaceous and the Neogene, which we have to take into consideration in some form when describing the evolution of the group.

Richness through time

Now that we know how sampling changed over time, we can calculate a number of diversity, extinction and origination rate series from the dataset with the basic divDyn() function. This function requires an occurrence dataset with a taxon (tax) and a discrete time (bin) column.

dd <- divDyn(dats, tax="genus", bin="stg")

The output of this function resembles that of the binstat() function. Variables are organized by their names:

- Names starting with t- (e.g. t3) are taxon counts.
- Names starting with div- are diversity estimators.
- Names starting with ori- and ext- are origination and extinction rate estimators, respectively.
- Names starting with samp- are measures of sampling completeness.

The abbreviations are resolved in the help file that can be accessed with

?divDyn

This file also includes all equations that are used to calculate the variables.

The most basic way to count richness over time is with range-through diversities (divRT). This is simply the count of taxa thought to have been living in an interval (a taxon is assumed to be present in an unsampled interval if it was found both before and after it). You can plot these with:

# simple diversity
  tp(ylim=c(0,300), ylab="richness")    
  lines(stages$mid, dd$divRT, lwd=2)

This is the metric that can be applied to Sepkoski’s compendium (2002). Note that diversity in the Cretaceous is comparable to that shown by corals in the Neogene, with a slight decrease within the Cretaceous. The sharp drop in the last bin is also worthy of note. This is an edge effect, partially present because fossil information for the Holocene is usually poorer in the PaleoDB. However, adding all recent taxa to the dataset would create an artifact known as the Pull of the Recent: the complete preservation would spuriously increase range-based diversity estimates in the last few bins.
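The range-through logic itself can be illustrated with a few invented genus ranges in base R (a sketch, not the package's implementation): a taxon counts toward every bin between its first and last appearance, whether or not it was sampled there.

```r
# First (FAD) and last (LAD) appearance bins of three invented genera
fad <- c(genA = 1, genB = 2, genC = 4)
lad <- c(genA = 3, genB = 5, genC = 4)

# range-through richness: a genus is counted in every bin
# between its first and last appearance, inclusive
divRT <- sapply(1:5, function(bin) sum(fad <= bin & lad >= bin))
divRT  # 1 2 2 2 1
```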

Subsampling

Ultimately, all time series of richness suffer from the heterogeneity of sampling in space and time, as well as a number of other factors. For instance, the temporal bias is clear, as there is a high correlation between sampling and richness through time.

cor.test(dd$divRT, bs$occs, method="spearman")
## Warning in cor.test.default(dd$divRT, bs$occs, method = "spearman"): Cannot
## compute exact p-value with ties
## 
##  Spearman's rank correlation rho
## 
## data:  dd$divRT and bs$occs
## S = 6167.7, p-value = 0.0007426
## alternative hypothesis: true rho is not equal to 0
## sample estimates:
##       rho 
## 0.5002229

Subsampling aims at estimating what values the focal variables would take if sampling were uniform over time. This methodology was developed for richness estimation (first in an analytical context, i.e. with algebra and equations, and then with Monte Carlo simulations), but can be generalized to other variables. You are invited to check out this generalization in the package handout, which you can access with the

vignette("handout")

command.
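Before running it on the real data, the idea of classical rarefaction can be sketched for a single interval in base R (invented abundances; this is not the subsample() implementation): draw a quota of q occurrences without replacement, count the distinct genera, and average over many trials.

```r
# Toy occurrence vector for one interval: three genera with
# very uneven sampling (abundances invented)
set.seed(1)
occs <- c(rep("Stylina", 60), rep("Isastrea", 30), rep("Montlivaltia", 10))
q <- 50   # subsampling quota

# in each trial, draw q occurrences and count the genera found
trials <- replicate(300, length(unique(sample(occs, q))))
mean(trials)  # subsampled richness estimate for this interval
```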

Classical Rarefaction

As all intervals have at least 50 occurrences, let’s start by rarefying every stage to 50 occurrences. You can accomplish this by running:

cr50 <- subsample(dats, tax="genus", bin="stg", 
  q=50, coll="collection_no", duplicates=FALSE, iter=300)

This function call will run the default Classical Rarefaction method on every interval 300 times, calculate the entire divDyn output matrix in every trial and average the results. By specifying coll="collection_no" and duplicates=FALSE, we instruct the function to treat multiple occurrences of the same genus in a collection as one. Its structure is the same as that of divDyn()’s output, so you can access the variables the same way.

tp(ylim=c(0,100), ylab="Subsampled richness")   
lines(stages$mid, cr50$divRT, lwd=2)