This is an archive for R-sig-geo, not a forum. It is not possible to post through Nabble: you may not start a new thread or follow up an existing one. If you wish to post but are not subscribed to the list, subscribe first through the list homepage (not via Nabble) and post from your subscribed email address. Until 2015-06-20, subscribers could post through Nabble, but the policy was changed because too many non-subscribers misunderstood the interface.

Re: Execute Extract function on several Raster with points layer

Thu, 09/17/2020 - 14:20
Dear Gaëtan,
(cc r-sig-geo)
please post your mails in this topic to the mailing list.
I don't really know what 'my "tmax_filesnames" object is a "large
character"' means. tmax_filenames is a typical character vector of 3660
elements, so it should not cause any problem.
Anyway, the error message indicates that one or more of the file names
are not correct. You should carefully check whether tmax_filenames was
generated appropriately.
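If some of the 3660 generated paths are wrong, a quick base-R check narrows them down; `check_paths` here is a hypothetical helper, not part of the original mail, and only assumes `tmax_filenames` is the character vector of paths built earlier in the thread:

```r
# Return the subset of paths that do not point to an existing file.
check_paths <- function(paths) {
  missing <- paths[!file.exists(paths)]
  if (length(missing) > 0)
    message(length(missing), " file(s) not found")
  missing
}

# head(check_paths(tmax_filenames))  # inspect a few offending paths, if any
```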
Best wishes,
Ákos

On 2020-09-17 21:00, Gaetan Martinelli wrote:
> Hello again,
>
> The stack function doesn't work because my "tmax_filesnames" object is
> a "large character".
> Here is the error message I received after this line in my script:
> > tmax_raster <- raster::stack(tmax_filenames)
> Error in .local(.Object, ...) :
> Error in .rasterObjectFromFile(x, band = band, objecttype =
> "RasterLayer",  :
>   Cannot create a RasterLayer object from this file. (file does not exist)
>
> How do I fix this error?
> Should I transform my Large Character object?
>
> Thanks again Àkos.
>
> Gaëtan
>


_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: Execute Extract function on several Raster with points layer

Thu, 09/17/2020 - 13:40
Thank you very much for the reply.

It's exactly that. My three main folders are "Max_T", "Min_T" and "PCP" (for precipitation); they have subfolders per year, with 366 raster files per year. My rasters all have the same structure.
Thanks for all these elements, I'll try that.

Since the structure is identical across my 30 years and my 3 variables, I will also try the second method. But it will surely take longer because of my memory. From what I understand, I have to create my empty output array before extracting.
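One way to pre-create that output is a single long data.frame with one row per point * day * year, which the extraction loop then only fills in. A sketch with illustrative column names (the sizes follow the thread: 1022 points, 366 days, 10 years):

```r
# Pre-allocate the long output table; the loop later fills only T_Max.
n_points <- 1022
days     <- 1:366
years    <- 1961:1970

out <- data.frame(
  ID    = rep(seq_len(n_points), times = length(days) * length(years)),
  Year  = rep(years, each = n_points * length(days)),
  Day   = rep(rep(days, each = n_points), times = length(years)),
  T_Max = NA_real_
)

nrow(out)  # 1022 * 366 * 10 = 3740520 rows
```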

Re: Execute Extract function on several Raster with points layer

Thu, 09/17/2020 - 12:57
Hello Gaëtan,

So as far as I understand, you have 3 main folders:
"Max_T", ? and ?
and in all three folders there are subfolders
"1961", "1962", ... "1970".
In each folder there are 366 raster files, whose naming
convention is not known to us, but some of the files are called
"max1961_1.asc", "max1961_2.asc", ... "max1961_366.asc" (in the case of
T_max and year 1961).

In this case, the 3660 layers (366 days * 10 years) that belong to T_max
can be read into one large RasterStack in this way:

tmax_filenames <- c(outer(X = as.character(1:366), Y = as.character(1961:1970),
    FUN = function(doy, year) paste0(
        "N:/400074 Conservation des sols et CC/Data/Climate data/Climate-10km/Max_T/",
        year, "/max", year, "_", doy, ".asc")))
tmax_raster <- stack(tmax_filenames)

You can give self-explanatory names to the raster layers:

names(tmax_raster) <- c(outer(X = as.character(1:366), Y = as.character(1961:1970),
    FUN = function(doy, year) paste0(year, "_", doy)))

But if the structure of the rasters is the same (i.e. the cell size,
extent and projection), then I recommend doing the raster-vector
overlay once, saving the cell numbers that you are interested in, and then,
in nested for loops (one loop for the climate variable, one for the year
and one for the day), reading the rasters one by one, extracting the values
by those cell numbers, and saving the result in a previously
created data.frame. That way you may not run into memory issues,
although it will take a lot of time...
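As an illustration only (untested, and it assumes the raster package, the Datapoints object from the original post, and the folder layout described above), that strategy could be sketched like this:

```r
library(raster)

# Overlay once: the cell numbers are valid for every layer,
# because all rasters share the same grid.
template <- raster(paste0("N:/400074 Conservation des sols et CC/Data/",
                          "Climate data/Climate-10km/Max_T/1961/max1961_1.asc"))
cells <- cellFromXY(template, Datapoints)

results <- list()
for (year in 1961:1970) {
  for (doy in 1:366) {
    f <- paste0("N:/400074 Conservation des sols et CC/Data/",
                "Climate data/Climate-10km/Max_T/",
                year, "/max", year, "_", doy, ".asc")
    r <- raster(f)                        # read a single layer at a time
    results[[length(results) + 1L]] <- data.frame(
      ID = seq_along(cells), Year = year, Day = doy, T_Max = r[cells])
  }
}
tmax_table <- do.call(rbind, results)     # one long table, as requested
```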

HTH,
Ákos Bede-Fazekas
Hungarian Academy of Sciences


Execute Extract function on several Raster with points layer

Thu, 09/17/2020 - 12:28
Hello everyone, R team,

Sorry in advance for this long message. Your help will be invaluable.

For a few days now I have been stuck on a task in R. I will try to summarize my problem.

I have many rasters: one single-band ASCII file for each day of the year, over 30 years, for three climatic variables on a 10 km * 10 km grid (T_min, T_max, Precipitation). So I have around 32,940 raster files in total (366 days * 30 years * 3 variables).

I also have a layer of around 1000 points.

I tried to use the stack function and then intersect each raster file with my 1000 points. I cannot create an independent matrix for each of my files with the "extract" function, to then concatenate all the matrices into a single table.

I tried this, an example for 10 years and only T_Max (my files are organized the same way for my two other variables):

#Datapoints
Datapoints <- readOGR(dsn = "H:/Inventaire/R/final",
                      layer = "Centroid_champs")
Datapoints <- spTransform(Datapoints, CRS("+init=epsg:4326")) # 1022 points in the data
st_crs(Datapoints)

#Raster files
#Each year holds daily data; the rasters are named "max1961_1", "max1961_10", "max1961_100", etc.
folders = list(
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1961'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1962'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1963'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1964'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1965'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1966'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1967'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1968'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1969'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1970')
)
files = unlist(sapply(folders, function(folder) {
  list.files(folder, full.names = TRUE)
}))
files

MET <- lapply(files, raster)
s <- raster::stack(MET)

output <- list()
for (i in 1:length(MET)) {
  output[[i]] <- extract(s, Datapoints)
  names(output)[[i]] <- paste("Année", MET[i], sep = "_")
}

Also, I tried this:

p1 <- 1022  # number of IDs in my Datapoints
p2 <- 1     # column holding the values extracted from my raster
p3 <- 3660  # 3660 matrices (366 days * 10 years)
matlist <- list(array(NA, c(p1, p2, p3)))  # a list of independent matrices

for (i in seq_along(MET)) {
  matlist[[i]] <- extract(s, Datapoints)
}

But nothing works...

I would like my script to perform these actions:
- For each raster in my RasterStack, extract the climate values and link them to my "Datapoints";
- From each file name, take the first three characters to get the column for my weather variable (here "T_Max", the column with my raster values); take the following four characters and put them in a new column "Year"; and finally take the last characters of the file name to create a new column "Day";
- Concatenate all the independent output matrices corresponding to each intersection made with my different raster files.

In the end I would have a huge table, but one that will allow me to do my analysis: a table with 9 attributes (6 attributes of my points + Year + Day + T_Max), like this:

ID_Datapoint  Year  Day  T_Max
1             1960  1
2             1960  1
...           1960  1
1022          1960  1
1             1960  2
2             1960  2
...           1960  2
1022          1960  2
...           ...   ...
1             1970  1
2             1970  1
...           1970  1
1022          1970  1
1             1970  2
2             1970  2
...           1970  2
1022          1970  2
...           ...   ...

Could a loop do this task?

I'm sorry, I am gradually learning to work with R, but this exercise is more difficult than expected... Please feel free to tell me if my question is inappropriate.

Thank you very much in advance for your answers. Your help or your comments will be really appreciated.

Have a good day.

Gaëtan Martinelli
Water and agriculture research professional in Quebec.
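The file-name parsing described in the message above (variable prefix, year and day from names like "max1961_12.asc") can be done in base R; `parse_name` is a hypothetical helper, not part of the original post:

```r
# Split a path like ".../max1961_12.asc" into variable prefix, year and day.
parse_name <- function(filename) {
  base <- sub("\\.asc$", "", basename(filename))
  data.frame(
    Variable = sub("^([A-Za-z]+)[0-9].*$", "\\1", base),
    Year     = as.integer(sub("^[A-Za-z]+([0-9]{4})_.*$", "\\1", base)),
    Day      = as.integer(sub("^.*_([0-9]+)$", "\\1", base))
  )
}

parse_name("N:/Data/Climate data/Climate-10km/Max_T/1961/max1961_12.asc")
# Variable = "max", Year = 1961, Day = 12
```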

Re: How to cluster the standard errors in the SPLM function?

Wed, 09/16/2020 - 07:29
Thank you very much, Mathias.

I hope this helps more people as well.

Have a good day.
________________________________
From: R-sig-Geo <[hidden email]> on behalf of Mathias Moser <[hidden email]>
Sent: Tuesday, 15 September 2020 15:24
To: [hidden email] <[hidden email]>
Subject: Re: [R-sig-Geo] How to cluster the standard errors in the SPLM function?

Pietro (and anyone else interested): the Conley SE code is still
available on Darin Christensen's Github:
https://github.com/darinchristensen/conley-se

BR, Mathias

On Tue, 2020-09-15 at 15:37 +0200, Roger Bivand wrote:
> Please do not repeat messages, it does not help.
>
> Did you provide a reproducible example, perhaps from plm? Did you read the
> code in splm, for example on R-Forge, or check the development version on
> R-Forge (https://r-forge.r-project.org/R/?group_id=352),
> install.packages("splm", repos="http://R-Forge.R-project.org")?
>
> Did you reference any articles showing how this approach might be
> implemented? Do you know whether any such code exists? Are you thinking of
> Conley approaches? Such as:
> http://www.trfetzer.com/using-r-to-estimate-spatial-hac-errors-per-conley/
> ? Unfortunately, the dropbox link is now stale.
>
> Please report back on your progress, contact the splm maintainer to offer
> ideas or assistance, and anyway provide a reproducible example and the
> references you are using.
>
> Hope this helps,
>
> Roger
>
> On Tue, 15 Sep 2020, Pietro Andre Telatin Paschoalino wrote:
>
> > Hello everyone,
> >
> > Could someone help me with splm (Spatial Panel Model By Maximum
> > Likelihood) in R?
> >
> > I want to know if is possible to cluster the standard errors by my
> > individuals (like as in plm function). After a lot of research a
> > found that there are more people with the same doubt, you can see
> > this here, the person has the same problem as me:
> >
> > https://stackoverflow.com/questions/36869932/clustered-standard-errors-in-spatial-panel-linear-models-splm
> >
> >
> > Thank you all.
> >
> >
> > Pietro Andre Telatin Paschoalino
> > PhD candidate in Economics at the Universidade Estadual de
> > Maringá - PCE.

Re: Filtering a set of points in a "ppp" object by distance using marks

Wed, 09/16/2020 - 07:05
Hi Marcelino,

Thanks so much. I just made a little change to your code:

ddd <- nndist(insects.ppp, by = factor(insects.ppp$marks))
subset(insects.ppp, marks == "termiNests" & ddd[, "antNests"] > 20)

I used `ddd[,"antNests"] > 20` instead of `ddd[,"termiNests"] > 20` because I
need each "termiNests" mark to be more than 20 units from every "antNests".
Best wishes,

Alexandre

--
Alexandre dos Santos
Geotechnologies and Spatial Statistics applied to Forest Entomology
Instituto Federal de Mato Grosso (IFMT) - Campus Caceres
Caixa Postal 244 (PO Box)
Avenida dos Ramires, s/n - Vila Real
Caceres - MT - CEP 78201-380 (ZIP code)
Phone: (+55) 65 99686-6970 / (+55) 65 3221-2674
Lattes CV: http://lattes.cnpq.br/1360403201088680
OrcID: orcid.org/0000-0001-8232-6722
ResearchGate: www.researchgate.net/profile/Alexandre_Santos10
Publons: https://publons.com/researcher/3085587/alexandre-dos-santos/
--

On 16/09/2020 03:18, Marcelino de la Cruz Rot wrote:
> Hi Alexandre,
>
> may be this?
>
>
> ddd <- nndist(insects.ppp, by=factor(insects.ppp$marks))
> subset(insects.ppp,  marks=="termiNests" & ddd[,"termiNests"] >20)
>
>
> Cheers,
>
> Marcelino
>
>
> On 15/09/2020 at 22:52, ASANTOS via R-sig-Geo wrote:
>> Dear R-Sig-Geo Members,
>>
>> I'd like to find a way to filter a set of points in a "ppp"
>> object by minimum distance, but only between different marks. In my
>> example:
>>
>> #Package
>> library(spatstat)
>>
>> #Point process example - ants
>> data(ants)
>> ants.ppp<-ppp(x=ants$x,y=ants$y,marks=rep("antNests",length(ants$x)),window=Window(ants))
>>
>>
>>
>> # Create an artificial point pattern - termites
>> termites <- rpoispp(0.0005, win=Window(ants))
>> termites.ppp<-ppp(x=termites$x,y=termites$y,marks=rep("termiNests",length(termites$x)),window=Window(ants))
>>
>>
>>
>> #Join ants.ppp and termites.ppp
>> insects.ppp<-superimpose(ants.ppp,termites.ppp)
>>
>>
>> #If I try to use subset function:
>>
>> subset(insects.ppp, pairdist(insects.ppp) > 20 & marks=="termiNests")
>>
>> # Marked planar point pattern: 223 points
>> # marks are of storage type 'character'
>> # window: polygonal boundary
>> # enclosing rectangle: [-25, 803] x [-49, 717] units (one unit = 0.5 feet)
>> # Warning message:
>> # In ppp(X[, 1], X[, 2], window = win, marks = marx, check = check) :
>> #   70751 out of 70974 points had NA or NaN coordinate values, and were discarded
>>
>> Not the desired result yet, because I'd like to keep only
>> the "termiNests" that are more than 20 units from the "antNests" marks, not
>> from other "termiNests".
>>
>> Please any ideas?
>>
>> Thanks in advanced,
>>
>
_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: Filtering a set of points in a "ppp" object by distance using marks

Wed, 09/16/2020 - 07:04
Hi Marcelino,

Thanks, I just made a small change to your code:

ddd <- nndist(insects.ppp, by=factor(insects.ppp$marks))
subset(insects.ppp,  marks=="termiNests" & ddd[,"antNests"] >20)

I used `ddd[,"antNests"] > 20` instead of `ddd[,"termiNests"] > 20` because I
need each "termiNests" point to be more than 20 units from every "antNests" point.
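
Putting the thread's pieces together, a self-contained sketch (the `set.seed()` call is my addition, only so the artificial termite pattern is reproducible):

```r
library(spatstat)

# Ant nests from the built-in dataset, all given the mark "antNests"
data(ants)
ants.ppp <- ppp(x = ants$x, y = ants$y,
                marks = rep("antNests", length(ants$x)),
                window = Window(ants))

# Artificial termite pattern with the mark "termiNests"
set.seed(1)
termites <- rpoispp(0.0005, win = Window(ants))
termites.ppp <- ppp(x = termites$x, y = termites$y,
                    marks = rep("termiNests", length(termites$x)),
                    window = Window(ants))

insects.ppp <- superimpose(ants.ppp, termites.ppp)

# nndist(..., by=) gives one column of nearest-neighbour distances
# per mark level, so the "antNests" column measures the distance from
# each point to the nearest ant nest
ddd <- nndist(insects.ppp, by = factor(insects.ppp$marks))

# Termite nests more than 20 units from the nearest ant nest
far.termites <- subset(insects.ppp,
                       marks == "termiNests" & ddd[, "antNests"] > 20)
```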

Best wishes,

Alexandre

--
Alexandre dos Santos
Geotechnologies and Spatial Statistics applied to Forest Entomology
Instituto Federal de Mato Grosso (IFMT) - Campus Caceres
Caixa Postal 244 (PO Box)
Avenida dos Ramires, s/n - Vila Real
Caceres - MT - CEP 78201-380 (ZIP code)
Phone: (+55) 65 99686-6970 / (+55) 65 3221-2674
Lattes CV: http://lattes.cnpq.br/1360403201088680
OrcID: orcid.org/0000-0001-8232-6722
ResearchGate: www.researchgate.net/profile/Alexandre_Santos10
Publons: https://publons.com/researcher/3085587/alexandre-dos-santos/
--

Em 16/09/2020 03:18, Marcelino de la Cruz Rot escreveu:
> Hi Alexandre,
>
> may be this?
>
>
> ddd <- nndist(insects.ppp, by=factor(insects.ppp$marks))
> subset(insects.ppp,  marks=="termiNests" & ddd[,"termiNests"] >20)
>
>
> Cheers,
>
> Marcelino
>
>
> El 15/09/2020 a las 22:52, ASANTOS via R-sig-Geo escribió:
>> Dear R-Sig-Geo Members,
>>
>> I'd like to find any way to filtering a set of points in a "ppp"
>> object by minimum distance just only between different marks. In my
>> example:
>>
>> #Package
>> library(spatstat)
>>
>> #Point process example - ants
>> data(ants)
>> ants.ppp<-ppp(x=ants$x,y=ants$y,marks=rep("antNests",length(ants$x)),window=Window(ants))
>>
>>
>>
>> # Create a artificial point pattern - termites
>> termites <- rpoispp(0.0005, win=Window(ants))
>> termites.ppp<-ppp(x=termites$x,y=termites$y,marks=rep("termiNests",length(termites$x)),window=Window(ants))
>>
>>
>>
>> #Join ants.ppp and termites.ppp
>> insects.ppp<-superimpose(ants.ppp,termites.ppp)
>>
>>
>> #If I try to use subset function:
>>
>> subset(insects.ppp, pairdist(insects.ppp) > 20 & marks=="termiNests")
>>
>> # Marked planar point pattern: 223 points
>> # marks are of storage type 'character'
>> # window: polygonal boundary
>> # enclosing rectangle: [-25, 803] x [-49, 717] units (one unit = 0.5 feet)
>> # Warning message:
>> # In ppp(X[, 1], X[, 2], window = win, marks = marx, check = check) :
>> #   70751 out of 70974 points had NA or NaN coordinate values, and were discarded
>>
>> Not the desirable result yet, because I'd like to calculate just only
>> the > 20 "termiNests" to "antNests" marks and not the "termiNests"
>> with "termiNests" too.
>>
>> Please any ideas?
>>
>> Thanks in advanced,
>>
>
_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: Filtering a set of points in a "ppp" object by distance using marks

Wed, 09/16/2020 - 05:15
Sorry, I meant to say

subset(insects.ppp, marks=="termiNests" & ddd[,"antNests"] >20)

El 16/09/2020 a las 9:18, Marcelino de la Cruz Rot escribió:
> Hi Alexandre,
>
> may be this?
>
>
> ddd <- nndist(insects.ppp, by=factor(insects.ppp$marks))
> subset(insects.ppp,  marks=="termiNests" & ddd[,"termiNests"] >20)
>
>
> Cheers,
>
> Marcelino
>
>
> El 15/09/2020 a las 22:52, ASANTOS via R-sig-Geo escribió:
>> Dear R-Sig-Geo Members,
>>
>> I'd like to find any way to filtering a set of points in a "ppp"
>> object by minimum distance just only between different marks. In my
>> example:
>>
>> #Package
>> library(spatstat)
>>
>> #Point process example - ants
>> data(ants)
>> ants.ppp<-ppp(x=ants$x,y=ants$y,marks=rep("antNests",length(ants$x)),window=Window(ants))
>>
>>
>>
>> # Create a artificial point pattern - termites
>> termites <- rpoispp(0.0005, win=Window(ants))
>> termites.ppp<-ppp(x=termites$x,y=termites$y,marks=rep("termiNests",length(termites$x)),window=Window(ants))
>>
>>
>>
>> #Join ants.ppp and termites.ppp
>> insects.ppp<-superimpose(ants.ppp,termites.ppp)
>>
>>
>> #If I try to use subset function:
>>
>> subset(insects.ppp, pairdist(insects.ppp) > 20 & marks=="termiNests")
>>
>> # Marked planar point pattern: 223 points
>> # marks are of storage type 'character'
>> # window: polygonal boundary
>> # enclosing rectangle: [-25, 803] x [-49, 717] units (one unit = 0.5 feet)
>> # Warning message:
>> # In ppp(X[, 1], X[, 2], window = win, marks = marx, check = check) :
>> #   70751 out of 70974 points had NA or NaN coordinate values, and were discarded
>>
>> Not the desirable result yet, because I'd like to calculate just only
>> the > 20 "termiNests" to "antNests" marks and not the "termiNests"
>> with "termiNests" too.
>>
>> Please any ideas?
>>
>> Thanks in advanced,
>>
>
--
Marcelino de la Cruz Rot
Depto. de Biología y Geología
Física y Química Inorgánica
Universidad Rey Juan Carlos
Móstoles España

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: Filtering a set of points in a "ppp" object by distance using marks

Wed, 09/16/2020 - 02:18
Hi Alexandre,

may be this?


ddd <- nndist(insects.ppp, by=factor(insects.ppp$marks))
subset(insects.ppp,  marks=="termiNests" & ddd[,"termiNests"] >20)


Cheers,

Marcelino


El 15/09/2020 a las 22:52, ASANTOS via R-sig-Geo escribió:
> Dear R-Sig-Geo Members,
>
> I'd like to find any way to filtering a set of points in a "ppp" object by minimum distance just only between different marks. In my example:
>
> #Package
> library(spatstat)
>
> #Point process example - ants
> data(ants)
> ants.ppp<-ppp(x=ants$x,y=ants$y,marks=rep("antNests",length(ants$x)),window=Window(ants))
>
>
> # Create a artificial point pattern - termites
> termites <- rpoispp(0.0005, win=Window(ants))
> termites.ppp<-ppp(x=termites$x,y=termites$y,marks=rep("termiNests",length(termites$x)),window=Window(ants))
>
>
> #Join ants.ppp and termites.ppp
> insects.ppp<-superimpose(ants.ppp,termites.ppp)
>
>
> #If I try to use subset function:
>
> subset(insects.ppp, pairdist(insects.ppp) > 20 & marks=="termiNests")
>
> # Marked planar point pattern: 223 points
> # marks are of storage type 'character'
> # window: polygonal boundary
> # enclosing rectangle: [-25, 803] x [-49, 717] units (one unit = 0.5 feet)
> # Warning message:
> # In ppp(X[, 1], X[, 2], window = win, marks = marx, check = check) :
> #   70751 out of 70974 points had NA or NaN coordinate values, and were discarded
>
> Not the desirable result yet, because I'd like to calculate just only the > 20 "termiNests" to "antNests" marks and not the "termiNests" with "termiNests" too.
>
> Please any ideas?
>
> Thanks in advanced,
>
--
Marcelino de la Cruz Rot
Depto. de Biología y Geología
Física y Química Inorgánica
Universidad Rey Juan Carlos
Móstoles España

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Filtering a set of points in a "ppp" object by distance using marks

Tue, 09/15/2020 - 15:52
Dear R-Sig-Geo Members,

I'd like to find a way to filter a set of points in a "ppp" object by minimum distance, but only between different marks. In my example:

#Package
library(spatstat)

#Point process example - ants
data(ants)
ants.ppp<-ppp(x=ants$x,y=ants$y,marks=rep("antNests",length(ants$x)),window=Window(ants))


# Create an artificial point pattern - termites
termites <- rpoispp(0.0005, win=Window(ants))
termites.ppp<-ppp(x=termites$x,y=termites$y,marks=rep("termiNests",length(termites$x)),window=Window(ants))


#Join ants.ppp and termites.ppp
insects.ppp<-superimpose(ants.ppp,termites.ppp)


#If I try to use the subset function:

subset(insects.ppp, pairdist(insects.ppp) > 20 & marks=="termiNests")

# Marked planar point pattern: 223 points
# marks are of storage type 'character'
# window: polygonal boundary
# enclosing rectangle: [-25, 803] x [-49, 717] units (one unit = 0.5 feet)
# Warning message:
# In ppp(X[, 1], X[, 2], window = win, marks = marx, check = check) :
#   70751 out of 70974 points had NA or NaN coordinate values, and were discarded

Still not the desired result: I'd like to apply the > 20 threshold only from
"termiNests" to "antNests" marks, not from "termiNests" to "termiNests" as well.

Any ideas, please?

Thanks in advance,

--
Alexandre dos Santos
Geotechnologies and Spatial Statistics applied to Forest Entomology
Instituto Federal de Mato Grosso (IFMT) - Campus Caceres
Caixa Postal 244 (PO Box)
Avenida dos Ramires, s/n - Vila Real
Caceres - MT - CEP 78201-380 (ZIP code)
Phone: (+55) 65 99686-6970 / (+55) 65 3221-2674
Lattes CV: http://lattes.cnpq.br/1360403201088680
OrcID: orcid.org/0000-0001-8232-6722
ResearchGate: www.researchgate.net/profile/Alexandre_Santos10
Publons: https://publons.com/researcher/3085587/alexandre-dos-santos/
--


        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: How to cluster the standard errors in the SPLM function?

Tue, 09/15/2020 - 13:24
Pietro (and anyone else interested): the Conley SE code is still
available on Darin Christensen's Github:
https://github.com/darinchristensen/conley-se

BR, Mathias

On Tue, 2020-09-15 at 15:37 +0200, Roger Bivand wrote:
> Please do not repeat messages, it does not help.
>
> Did you provide a reproducible example, perhaps from plm? Did you read the
> code in splm, for example on R-Forge, or check the development version on
> R-Forge https://r-forge.r-project.org/R/?group_id=352,
> install.packages("splm", repos="http://R-Forge.R-project.org")?
>
> Did you reference any articles showing how this approach might be
> implemented? Do you know whether any such code exists? Are you thinking of
> Conley approaches? Such as:
> http://www.trfetzer.com/using-r-to-estimate-spatial-hac-errors-per-conley/
> ? Unfortunately, the dropbox link is now stale.
>
> Please report back on your progress, contact the splm maintainer to offer
> ideas or assistance, and anyway provide a reproducible example and the
> references you are using.
>
> Hope this helps,
>
> Roger
>
> On Tue, 15 Sep 2020, Pietro Andre Telatin Paschoalino wrote:
>
> > Hello everyone,
> >
> > Could someone help me with splm (Spatial Panel Model By Maximum
> > Likelihood) in R?
> >
> > I want to know if is possible to cluster the standard errors by my
> > individuals (like as in plm function). After a lot of research a
> > found that there are more people with the same doubt, you can see
> > this here, the person has the same problem as me:
> >
> > https://stackoverflow.com/questions/36869932/clustered-standard-errors-in-spatial-panel-linear-models-splm
> >
> >
> > Thank you all.
> >
> >
> >
> > Pietro Andre Telatin Paschoalino
> > Doutorando em Ciências Econômicas da Universidade Estadual de
> > Maringá - PCE.
> >
> >
> >
> > [[alternative HTML version deleted]]
> >
> >
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
>
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
>
>
_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: How to cluster the standard errors in the SPLM function?

Tue, 09/15/2020 - 09:11
Hello Roger, thank you for your answer.

Yes, with plm I could compute it with:

coeftest(x, vcovHC(x, type = ""))

But it does not work with the splm.

I could not find articles showing how this approach can be implemented in R. In Stata, however, the function xsmle makes it possible to estimate the model the way I want. I don't want to change software, but it is an option.

Thank you for the idea to use Conley SHAC; it is something I need to think about.

Again, thank you very much for your help; if I can't solve it, I will certainly post a reproducible example.

Pietro Andre Telatin Paschoalino
Doutorando em Ciências Econômicas da Universidade Estadual de Maringá - PCE.

________________________________
De: Roger Bivand <[hidden email]>
Enviado: terça-feira, 15 de setembro de 2020 10:37
Para: Pietro Andre Telatin Paschoalino <[hidden email]>
Cc: [hidden email] <[hidden email]>
Assunto: Re: [R-sig-Geo] How to cluster the standard errors in the SPLM function?

Please do not repeat messages, it does not help.

Did you provide a reproducible example, perhaps from plm? Did you read the
code in splm, for example on R-Forge, or check the development version on
R-Forge https://r-forge.r-project.org/R/?group_id=352,
install.packages("splm", repos="http://R-Forge.R-project.org")?

Did you reference any articles showing how this approach might be
implemented? Do you know whether any such code exists? Are you thinking of
Conley approaches? Such as:
http://www.trfetzer.com/using-r-to-estimate-spatial-hac-errors-per-conley/
? Unfortunately, the dropbox link is now stale.

Please report back on your progress, contact the splm maintainer to offer
ideas or assistance, and anyway provide a reproducible example and the
references you are using.

Hope this helps,

Roger

On Tue, 15 Sep 2020, Pietro Andre Telatin Paschoalino wrote:

> Hello everyone,
>
> Could someone help me with splm (Spatial Panel Model By Maximum Likelihood) in R?
>
> I want to know if is possible to cluster the standard errors by my individuals (like as in plm function). After a lot of research a found that there are more people with the same doubt, you can see this here, the person has the same problem as me:
>
> https://stackoverflow.com/questions/36869932/clustered-standard-errors-in-spatial-panel-linear-models-splm
>
> Thank you all.
>
>
> Pietro Andre Telatin Paschoalino
> Doutorando em Ciências Econômicas da Universidade Estadual de Maringá - PCE.
>
>
>        [[alternative HTML version deleted]]
>
>
--
Roger Bivand
Department of Economics, Norwegian School of Economics,
Helleveien 30, N-5045 Bergen, Norway.
voice: +47 55 95 93 55; e-mail: [hidden email]
https://orcid.org/0000-0003-2392-6140
https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en


        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: "no applicable method" for focal() function in raster

Tue, 09/15/2020 - 08:45
Hello,

Unfortunately your email client formatted your message in HTML, so the
content has been scrambled. Best results are achieved when the client is
configured to send plain text.

I think the issue is that lsm_l_condent() is expecting a raster (or
similar) input. See
https://cran.r-project.org/web/packages/landscapemetrics/landscapemetrics.pdf

Raster's focal() function slides along the raster, pulling out cell values
(coincident with your focal window) and passing them to the function you
specify. So, as the docs for raster::focal describe, that function must be
configured to "take multiple numbers, and return a single number."
landscapemetrics::lsm_l_condent is not configured that way.

Perhaps you need to create your own version of the function that just
operates on an input of numbers rather than an input of raster-like objects?
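
A possible workaround along those lines, sketched here and untested against the original data (the wrapper name `entropy_win` and the window handling are my own assumptions): rebuild each window's cell values into a tiny raster before calling lsm_l_condent(), so focal() passes plain numbers but the metric still sees a raster-like object.

```r
library(raster)
library(landscapemetrics)

# Wrapper taking a plain numeric vector (what focal() supplies) and
# returning a single number (what focal() requires).
entropy_win <- function(v, ...) {
  if (all(is.na(v))) return(NA_real_)
  # Rebuild the 3x3 window as a minimal RasterLayer so lsm_l_condent()
  # has a method for it; row-wise ordering of v is assumed here.
  r_win <- raster(matrix(v, nrow = 3, ncol = 3, byrow = TRUE))
  res <- lsm_l_condent(r_win, neighbourhood = 4, ordered = TRUE, base = "log2")
  res$value
}

# w <- matrix(1, 3, 3)
# result <- focal(r, w, fun = entropy_win, pad = TRUE, padValue = NA)
```

Note that this builds one raster object per focal cell, so it is likely to be very slow on large rasters.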

Cheers,
Ben

On Mon, Sep 14, 2020 at 4:42 PM Jaime Burbano Girón <[hidden email]>
wrote:

> Hi everyone,
>
> I want to apply a moving window (3x3) to estimate conditional entropy
> (Nowosad & Stepinski, 2019) over a heterogeneous landscape:
>
> entropy <- function(r){
>   entropy <- lsm_l_condent(r, neighbourhood = 4, ordered = TRUE, base = "log2")
>   return(entropy$value)
> }
> w <- matrix(1, 3, 3)
> result <- focal(r, w, fun = entropy)
>
> However, I get this error:
>
> Error in .focal_fun(values(x), w, as.integer(dim(out)), runfun, NAonly) :
> Evaluation error: no applicable method for 'lsm_l_condent' applied to an
> object of class "c('double', 'numeric')".
>
> But when I run the entropy function on the entire landscape, it works:
>
> > entropy(r)
> [1] 2.178874
>
> r is an INT4U raster object:
>
> class      : RasterLayer
> dimensions : 886, 999, 885114  (nrow, ncol, ncell)
> resolution : 300, 300  (x, y)
> extent     : 934805.7, 1234506, 1006566, 1272366  (xmin, xmax, ymin, ymax)
> crs        : +proj=tmerc +lat_0=4.59620041666667 +lon_0=-74.0775079166667 +k=1 +x_0=1000000 +y_0=1000000 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs
> values     : 99, 321113  (min, max)
>
> Does anyone have an idea how to solve the "no applicable method" error, or
> another way to estimate conditional entropy with a moving window?
>
> Thanks in advance for the help.
>
> Best,
>
> Jaime
>
>         [[alternative HTML version deleted]]
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
>

--
Ben Tupper
Bigelow Laboratory for Ocean Science
East Boothbay, Maine
http://www.bigelow.org/
https://eco.bigelow.org

        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: How to cluster the standard errors in the SPLM function?

Tue, 09/15/2020 - 08:37
Please do not repeat messages, it does not help.

Did you provide a reproducible example, perhaps from plm? Did you read the
code in splm, for example on R-Forge, or check the development version on
R-Forge https://r-forge.r-project.org/R/?group_id=352,
install.packages("splm", repos="http://R-Forge.R-project.org")?

Did you reference any articles showing how this approach might be
implemented? Do you know whether any such code exists? Are you thinking of
Conley approaches? Such as:
http://www.trfetzer.com/using-r-to-estimate-spatial-hac-errors-per-conley/ 
? Unfortunately, the dropbox link is now stale.

Please report back on your progress, contact the splm maintainer to offer
ideas or assistance, and anyway provide a reproducible example and the
references you are using.

Hope this helps,

Roger

On Tue, 15 Sep 2020, Pietro Andre Telatin Paschoalino wrote:

> Hello everyone,
>
> Could someone help me with splm (Spatial Panel Model By Maximum Likelihood) in R?
>
> I want to know if is possible to cluster the standard errors by my individuals (like as in plm function). After a lot of research a found that there are more people with the same doubt, you can see this here, the person has the same problem as me:
>
> https://stackoverflow.com/questions/36869932/clustered-standard-errors-in-spatial-panel-linear-models-splm
>
> Thank you all.
>
>
> Pietro Andre Telatin Paschoalino
> Doutorando em Ciências Econômicas da Universidade Estadual de Maringá - PCE.
>
>
> [[alternative HTML version deleted]]
>
--
Roger Bivand
Department of Economics, Norwegian School of Economics,
Helleveien 30, N-5045 Bergen, Norway.
voice: +47 55 95 93 55; e-mail: [hidden email]
https://orcid.org/0000-0003-2392-6140
https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en
_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo
Roger Bivand
Department of Economics
Norwegian School of Economics
Helleveien 30
N-5045 Bergen, Norway

How to cluster the standard errors in the SPLM function?

Tue, 09/15/2020 - 08:18
Hello everyone,

Could someone help me with splm (Spatial Panel Model By Maximum Likelihood) in R?

I want to know if it is possible to cluster the standard errors by my individuals (as in the plm function). After a lot of research I found that other people have the same doubt; you can see here that this person has the same problem as me:

https://stackoverflow.com/questions/36869932/clustered-standard-errors-in-spatial-panel-linear-models-splm

Thank you all.


Pietro Andre Telatin Paschoalino
Doutorando em Ciências Econômicas da Universidade Estadual de Maringá - PCE.


        [[alternative HTML version deleted]]


_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: stars objects and case_when

Tue, 09/15/2020 - 06:51
Thanks Edzer, I missed that the file name becomes the name of the layer.  This works great.

I should say that I am enjoying converting my raster processing algorithms into the stars package.  It is nice to use tidyverse- and sf-friendly tools.  Thank you (and all your collaborators) for these tools.

My best,

Julian


Edzer Pebesma writes:

> Thank you for pointing to this possibility, I'll add it to the stars
> docs somewhere.
>
> This works but only slightly adapted, e.g. as
>
> out = read_stars(system.file("tif/L7_ETMs.tif", package = "stars")) %>%
>   slice(band, 1) %>%
>   setNames("x") %>%
>   mutate(x = case_when (x<100 ~ NA_real_,   x>100 & x<150 ~ 2))
>
> where:
> - I used setNames("x") so that the attribute is renamed to "x", and
>   you can use x in the case_when expressions (rather than the lengthy
> "L7_ETMs.tif")
> - I changed NA into NA_real_ : the RHS values of the formulas need to
>   be all of the same type; as typeof(NA) is "logical", it breaks on
>  the second RHS which returns a numeric; if you would TRUE or FALSE
> rather than 2 in the second case_when formula, using NA would be the
> right thing to do in the first.
>
> The examples of case_when document the need for typed NA in RHS: this
> is intended behavior.
>
>
> On 9/14/20 4:39 PM, Julian M. Burgos wrote:
>> Dear list,
>> I am wondering if there is a way to use logical statements to
>> replace values of a stars object, for example the case_when function
>> or some other "tidyverse friendly" approach.  For example, I can do
>> something like this:
>> st1 <- read_stars(system.file("tif/L7_ETMs.tif", package = "stars"))
>> %>%
>>    slice(band, 1)
>> st1[st1 < 100] <- NA
>> st1[st1 > 100 & st1 < 150] <- 2
>> ... and so for.  But I am wondering if there is a way to do this as
>> part of a pipe, looking something like this:
>> st1 <- read_stars(system.file("tif/L7_ETMs.tif", package = "stars"))
>> %>%
>>    slice(band, 1) %>%
>>    mutate(x <- case_when (x<100 ~ NA,
>>                           x>100 & x<150 ~ 2))
>> Any ideas?
>> Takk,
>> Julian
>> --
>> Julian Mariano Burgos, PhD
>> Hafrannsóknastofnun, rannsókna- og ráðgjafarstofnun hafs og vatna/
>> Marine and Freshwater Research Institute
>> Botnsjávarsviðs / Demersal Division
>>    Fornubúðir 5, IS-220 Hafnarfjörður, Iceland
>> http://www.hafogvatn.is/
>> Sími/Telephone : +354-5752037
>> Netfang/Email: [hidden email]
>> _______________________________________________
>> R-sig-Geo mailing list
>> [hidden email]
>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
>>

--
Julian Mariano Burgos, PhD
Hafrannsóknastofnun, rannsókna- og ráðgjafarstofnun hafs og vatna/
Marine and Freshwater Research Institute
Botnsjávarsviðs / Demersal Division
  Fornubúðir 5, IS-220 Hafnarfjörður, Iceland
www.hafogvatn.is
Sími/Telephone : +354-5752037
Netfang/Email: [hidden email]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Doubt about SPLM function

Mon, 09/14/2020 - 15:57
Hello everyone,

Could someone help me with splm (Spatial Panel Model By Maximum Likelihood) in R?

I want to know if it is possible to cluster the standard errors by my individuals (as in the plm function). After a lot of research I found that other people have the same doubt; you can see here that this person has the same problem as me:

https://stackoverflow.com/questions/36869932/clustered-standard-errors-in-spatial-panel-linear-models-splm

Thank you all.


Pietro Andre Telatin Paschoalino
Doutorando em Ciências Econômicas da Universidade Estadual de Maringá - PCE.


        [[alternative HTML version deleted]]


_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

"no applicable method" for focal() function in raster

Mon, 09/14/2020 - 15:41
Hi everyone,

I want to apply a moving window (3x3) to estimate conditional entropy
(Nowosad & Stepinski, 2019) over a heterogeneous landscape:

entropy <- function(r){
  entropy <- lsm_l_condent(r, neighbourhood = 4, ordered = TRUE, base = "log2")
  return(entropy$value)
}
w <- matrix(1, 3, 3)
result <- focal(r, w, fun = entropy)

However, I get this error:

Error in .focal_fun(values(x), w, as.integer(dim(out)), runfun, NAonly) :
Evaluation error: no applicable method for 'lsm_l_condent' applied to an
object of class "c('double', 'numeric')".

But when I run the entropy function on the entire landscape, it works:

> entropy(r)
[1] 2.178874

r is an INT4U raster object:

class      : RasterLayer
dimensions : 886, 999, 885114  (nrow, ncol, ncell)
resolution : 300, 300  (x, y)
extent     : 934805.7, 1234506, 1006566, 1272366  (xmin, xmax, ymin, ymax)
crs        : +proj=tmerc +lat_0=4.59620041666667 +lon_0=-74.0775079166667 +k=1 +x_0=1000000 +y_0=1000000 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs
values     : 99, 321113  (min, max)

Does anyone have an idea how to solve the "no applicable method" error, or
another way to estimate conditional entropy with a moving window?

Thanks in advance for the help.

Best,

Jaime

        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: stars objects and case_when

Mon, 09/14/2020 - 13:46
Thank you for pointing to this possibility, I'll add it to the stars
docs somewhere.

This works but only slightly adapted, e.g. as

out = read_stars(system.file("tif/L7_ETMs.tif", package = "stars")) %>%
   slice(band, 1) %>%
   setNames("x") %>%
   mutate(x = case_when (x<100 ~ NA_real_,   x>100 & x<150 ~ 2))

where:
- I used setNames("x") so that the attribute is renamed to "x", and you
can use x in the case_when expressions (rather than the lengthy
"L7_ETMs.tif")
- I changed NA into NA_real_ : the RHS values of the formulas need to be
all of the same type; as typeof(NA) is "logical", it breaks on the
second RHS, which returns a numeric; if you used TRUE or FALSE rather
than 2 in the second case_when formula, plain NA would be the right
thing to use in the first.

The examples of case_when document the need for typed NA in RHS: this is
intended behavior.
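
The typed-NA rule also bites outside stars; a minimal vector-only sketch of the same point (dplyr assumed to be attached):

```r
library(dplyr)

x <- c(50, 120, 200)

# NA_real_ keeps every right-hand side the same type (double); a plain
# logical NA in the first formula can error out on older dplyr versions.
case_when(x < 100 ~ NA_real_,
          x > 100 & x < 150 ~ 2,
          TRUE ~ x)
```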


On 9/14/20 4:39 PM, Julian M. Burgos wrote:
> Dear list,
>
> I am wondering if there is a way to use logical statements to replace values of a stars object, for example the case_when function or some other "tidyverse friendly" approach.  For example, I can do something like this:
>
> st1 <- read_stars(system.file("tif/L7_ETMs.tif", package = "stars")) %>%
>    slice(band, 1)
>
> st1[st1 < 100] <- NA
> st1[st1 > 100 & st1 < 150] <- 2
>
> ... and so for.  But I am wondering if there is a way to do this as part of a pipe, looking something like this:
>
> st1 <- read_stars(system.file("tif/L7_ETMs.tif", package = "stars")) %>%
>    slice(band, 1) %>%
>    mutate(x <- case_when (x<100 ~ NA,
>                           x>100 & x<150 ~ 2))
>
> Any ideas?
>
> Takk,
>
> Julian
>
> --
> Julian Mariano Burgos, PhD
> Hafrannsóknastofnun, rannsókna- og ráðgjafarstofnun hafs og vatna/
> Marine and Freshwater Research Institute
> Botnsjávarsviðs / Demersal Division
>    Fornubúðir 5, IS-220 Hafnarfjörður, Iceland
> www.hafogvatn.is
> Sími/Telephone : +354-5752037
> Netfang/Email: [hidden email]
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
>
--
Edzer Pebesma
Institute for Geoinformatics
Heisenbergstrasse 2, 48149 Muenster, Germany
Phone: +49 251 8333081

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: proj.db can't be found when rgdal is loaded (sometimes)

Mon, 09/14/2020 - 13:45
On Mon, 14 Sep 2020, Christian John via R-sig-Geo wrote:

> Hi there folks,
>
> I am running into problems using sf objects when rgdal has been loaded.
> When I run:

The following is not a reprex. What we need is the output of:

> sf::sf_extSoftVersion()
           GEOS           GDAL         proj.4 GDAL_with_GEOS     USE_PROJ_H
        "3.8.1"        "3.1.3"        "7.1.1"         "true"         "true"
> rgdal::rgdal_extSoftVersion()
           GDAL GDAL_with_GEOS           PROJ             sp
        "3.1.3"         "TRUE"        "7.1.1"        "1.4-4"

(in my case). In your case, the PROJ seen by sf is 7.1.1, by rgdal is
5.2.0. Loading rgdal after sf resets PROJ_LIB to the pre-PROJ 6 version.
You appear to have installed the rgdal binary from CRAN, but installed sf
(using homebrew?) against local GEOS/GDAL/PROJ. Either re-install rgdal to
match, or re-install sf from CRAN as a binary. Installing CRAN macOS
binaries should ensure coherence. If problems persist, report back (and
perhaps on R-sig-Mac; I have no way to check which GDAL/PROJ versions are
bundled with CRAN macOS binaries).

We are exploring a shared GDAL/PROJ metadata package, but are stuck on the
same problem - users do not take steps to keep their R packages cleanly
built on external software, and package maintainers must expect a
reasonable amount of insight from users.

This list has been given plenty of information that PROJ < 6 and PROJ >= 6
are different universes, so mixing packages using both is not supported.
In this case, sf needs proj.db, but cannot find it once rgdal has been
loaded, pointing PROJ_LIB to a directory without it.
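A minimal sketch of the CRAN-binary route (an assumption that macOS binaries are available for both packages; this is one way to restore coherence, not the only fix):

```r
# Reinstall both packages as CRAN binaries so they bundle
# matching PROJ/GDAL libraries.
install.packages(c("sf", "rgdal"), type = "binary")

# After restarting R, check that both packages report the
# same PROJ version:
sf::sf_extSoftVersion()["proj.4"]
rgdal::rgdal_extSoftVersion()["PROJ"]
```

If the two reported PROJ versions still differ, the packages are linked against different PROJ installations and mixing them remains unsupported.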

Roger


>
> Sys.getenv("PROJ_LIB")
> library(sf); library(rnaturalearth); library(ggplot2)
> Sys.getenv("PROJ_LIB")
> ROI1 = ne_countries(returnclass = 'sf') %>%
>  st_combine() %>%
>  st_buffer(0.5)  %>%
>  st_wrap_dateline()
> ggplot() + geom_sf(data = ROI1)
> library(rgdal)
> Sys.getenv("PROJ_LIB")
> ROI2 = ne_countries(returnclass = 'sf') %>%
>  st_combine() %>%
>  st_buffer(0.5)  %>%
>  st_wrap_dateline()
> ggplot() + geom_sf(data = ROI2)
>
> everything plots fine. The sf startup message is:
>
> Linking to GEOS 3.8.1, GDAL 3.1.2, PROJ 7.1.1
>
> The PROJ_LIB changes upon loading rgdal from "" to
> "/Library/Frameworks/R.framework/Versions/3.6/Resources/library/rgdal/proj".
> Loading rgdal generates the message:
>
> Loading required package: sp
> rgdal: version: 1.5-16, (SVN revision 1050)
> Geospatial Data Abstraction Library extensions to R successfully loaded
> Loaded GDAL runtime: GDAL 2.4.2, released 2019/06/28
> Path to GDAL shared files:
> /Library/Frameworks/R.framework/Versions/3.6/Resources/library/rgdal/gdal
> GDAL binary built with GEOS: FALSE
> Loaded PROJ runtime: Rel. 5.2.0, September 15th, 2018, [PJ_VERSION: 520]
> Path to PROJ shared files:
> /Library/Frameworks/R.framework/Versions/3.6/Resources/library/rgdal/proj
> Linking to sp version:1.4-2
> Overwritten PROJ_LIB was
> /Library/Frameworks/R.framework/Versions/3.6/Resources/library/rgdal/proj
> Warning message:
> package ‘rgdal’ was built under R version 3.6.2
>
> which explains the update in the PROJ_LIB. Notably, the gdal and proj
> runtime in the rgdal startup message are completely different from the sf
> versions. If I run:
>
> Sys.getenv("PROJ_LIB")
> library(sf); library(rnaturalearth); library(ggplot2)
> Sys.getenv("PROJ_LIB")
> library(rgdal)
> Sys.getenv("PROJ_LIB")
> ROI2 = ne_countries(returnclass = 'sf') %>%
>  st_combine() %>%
>  st_buffer(0.5)  %>%
>  st_wrap_dateline()
> ggplot() + geom_sf(data = ROI2)
>
> which is the same as above, but without the ROI1 generation and plotting, I
> get a stack overflow error upon plotting ROI2. All other messages are the
> same, except when I run the ggplot() line, I get:
>
> Error: node stack overflow
> In addition: There were 50 or more warnings (use warnings() to see the
> first 50)
> Error during wrapup: node stack overflow
>
> Warnings 1:50 are all
>
> 1: In CPL_crs_from_input(x) :
>  GDAL Error 1: PROJ: proj_create_from_database: Cannot find proj.db
>
> So, it seems like loading rgdal before doing anything sf:: related causes
> the issue. Any ideas for troubleshooting? Session info can be found below.
>
> Best,
> Christian
>
>> sessionInfo()
> R version 3.6.1 (2019-07-05)
> Platform: x86_64-apple-darwin15.6.0 (64-bit)
> Running under: macOS Catalina 10.15.6
>
> Matrix products: default
> BLAS:
> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
> LAPACK:
> /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
>
> locale:
> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
>
> attached base packages:
> [1] stats     graphics  grDevices utils     datasets  methods   base
>
> other attached packages:
> [1] rgdal_1.5-16        sp_1.3-2            ggplot2_3.3.0
> rnaturalearth_0.1.0
> [5] sf_0.9-5
>
> loaded via a namespace (and not attached):
> [1] Rcpp_1.0.3         rstudioapi_0.10    magrittr_1.5       units_0.6-5
>
> [5] munsell_0.5.0      tidyselect_0.2.5   colorspace_1.4-1
> lattice_0.20-38
> [9] R6_2.4.1           rlang_0.4.2        dplyr_0.8.3        tools_3.6.1
>
> [13] grid_3.6.1         gtable_0.3.0       KernSmooth_2.23-15 e1071_1.7-3
>
> [17] DBI_1.0.0          withr_2.1.2        rgeos_0.5-5        class_7.3-15
>
> [21] assertthat_0.2.1   lifecycle_0.1.0    tibble_2.1.3       crayon_1.3.4
>
> [25] purrr_0.3.3        glue_1.3.1         compiler_3.6.1     pillar_1.4.3
>
> [29] scales_1.1.0       classInt_0.4-2     pkgconfig_2.0.3
>
> [[alternative HTML version deleted]]
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
--
Roger Bivand
Department of Economics, Norwegian School of Economics,
Helleveien 30, N-5045 Bergen, Norway.
voice: +47 55 95 93 55; e-mail: [hidden email]
https://orcid.org/0000-0003-2392-6140
https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en
_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo
