This is an **archive** for **R-sig-geo**, not a forum. It is not possible to post through Nabble - you may not start a new thread nor follow up an existing thread. If you wish to post, but are not subscribed to the list through the list homepage, subscribe first **(through the list homepage, not via Nabble)** and post from your subscribed email address. Until 2015-06-20, subscribers could post through Nabble, but the policy was changed because too many non-subscribers misunderstood the interface.

### Re: Problems with gstat gaussian variogram and cross validation when fix nugget at 0

Stefano,

Fixing the nugget at zero for the Gaussian variogram can often cause numerical problems in kriging. If you have two observations that are close in space, their modelled semivariance will be so small that you get numerical issues with the covariance matrix. In some cases it will be singular; in your case it could be inverted, but the absolute values of the weights will be very high for some of the cross-validation locations. It seems you have two stations that are very close to each other; if you remove one of these, you will see that the results become more similar (still not identical) to those of the model with a small nugget effect.
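To illustrate (a hypothetical sketch, not from the original data, using the sill and range of the model discussed below): with a pure Gaussian model, two stations 1 m apart produce two nearly identical rows in the covariance matrix, so its condition number explodes; even a tiny nugget separates the rows.

```r
# Gaussian covariance: C(h) = psill * exp(-(h/range)^2), plus nugget at h = 0
gau_cov <- function(h, psill = 216, range = 3640, nugget = 0)
  ifelse(h == 0, psill + nugget, psill * exp(-(h / range)^2))

# three stations, two of them only 1 m apart
d <- as.matrix(dist(cbind(c(0, 1, 2000), c(0, 0, 0))))

kappa(gau_cov(d))                # zero nugget: huge condition number
kappa(gau_cov(d, nugget = 1e-3)) # small nugget: orders of magnitude smaller
```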

Best,

Jon

--

Jon Olav Skøien

European Commission

Joint Research Centre – JRC.E.1

Disaster Risk Management Unit

Building 26b 1/144 | Via E.Fermi 2749, I-21027 Ispra (VA) Italy, TP 267

[hidden email]

Tel: +39 0332 789205

Disclaimer: Views expressed in this email are those of the individual and do not necessarily represent official views of the European Commission.

________________________________________

From: R-sig-Geo [[hidden email]] on behalf of Stefano Menichetti [[hidden email]]

Sent: 12 October 2018 16:36

To: [hidden email]

Subject: [R-sig-Geo] Problems with gstat gaussian variogram and cross validation when fix nugget at 0

Dear all, I have a question about fixing the nugget at 0 in a Gaussian variogram using the *gstat* package.

I have encountered a serious problem with a variogram of piezometric data that gives "strange" results when I fix the nugget at 0.

This is the file and how to import it:

```r
library(sp)    # coordinates() below comes from sp
library(gstat)
# file: https://drive.google.com/open?id=1AOilEhR8G8YANVuFIBUcmEg-6UG1Ajrr
lay1 = read.delim(paste(path, file, sep=""), sep="\t", dec=",")
SpData = na.exclude(subset(lay1, select = c("X","Y","Observed")))
colnames(SpData) = c("X","Y","Z")
```

The initial variogram:

```r
coordinates(SpData) = ~X+Y
var = variogram(Z~1, SpData)
```

shows a slow initial rise and so suggests a *Gaussian* model.

After a first variogram with:

```r
m = vgm(200, "Gau", 3000, 0)
plot(var, model=m,
     main = paste(m$model[2], " Nug=", m$psill[1],
                  ", Sill=", round(m$psill[2]),
                  ", Range=", round(m$range[2]), sep=""))
```

I obtained a fitted variogram,

```r
m = fit.variogram(var, m)
```

with reasonable cross-validation results:

```r
krg_cv = krige.cv(Z~1, SpData, model = m)
plot(krg_cv$observed, krg_cv$var1.pred, xlab="Z observed", ylab="Z predicted")
abline(a=0, b=1)
paste("mean residual =", mean(krg_cv$residual))
paste("RMS residual =", mean(krg_cv$residual^2)^0.5)
```

The resulting map is very smooth despite the apparently small nugget, so **I tried to fix the nugget at 0** (exact interpolation).

But the cross-validation results with

```r
m = vgm(216, "Gau", 3640, 0)
```

are dramatically wrong, and also very different from the CV results with

```r
m = vgm(216, "Gau", 3640, 1e-3)
```

What happens? Am I doing something wrong? Why does setting the nugget to 0 rather than to a small value affect the results so strongly?

Thank you very much in advance,

Stefano

Dott. Geol. Stefano Menichetti

=============================================================

ARPAT - Agenzia per la Protezione dell'Ambiente della Toscana

Settore Tecnico SIRA - Sistema Informativo Regionale Ambientale via Nicola Porpora 22 - 50144 Firenze

tel. 0553206333 cell. 3668217978 (3383550147) fax 0553206410

[hidden email] http://www.arpat.toscana.it http://sira.arpat.toscana.it


_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### addImageQuery() result stops working in shiny app

Hi all

I've put together a shiny app whose ultimate use would be to share MODIS-Aqua-based estimates of chlorophyll-a concentration in RasterLayer form.

I would like the user to be able to select from a list of layers in a stack, and once that layer is displayed I'd like them to be able to see the RasterLayer value wherever the cursor is located.

I have no problem making the shiny app do this consistently for a single RasterLayer. However, once the user switches to a RasterLayer other than the default shown when the app opens, the value at the cursor is no longer displayed.

Does anyone know if there is a way to change this behavior by changing my code? Given my location on the learning curve, I am guessing that my code is the issue. Somehow I think I have failed to correctly relay the change to addImageQuery().

The code below should be a reproducible example. It may not be an ideal example, but it is self-contained and only requires that the user has the required packages. And it is the best I know how to do.

Thanks in advance to anyone who reads this!

```r
# The example....
library(raster)
library(leaflet)
library(shiny)
library(mapview)

# Create raster data.
# Each raster represents the average of multiple rasters during a weekly
# period. In this example, five weeks are represented.

# create an extent object
myext <- extent(707900, 980000, 540000, 1100000)
mycrs <- "+proj=aea +lat_1=42.122774 +lat_2=49.01518 +lat_0=45.568977 +lon_0=-84.455955 +x_0=1000000 +y_0=1000000 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs"

r1 <- raster(ncol=50, nrow=75, ext=myext, crs=mycrs)
values(r1) <- rnorm(3750, 0, 2)
r2 <- raster(ncol=50, nrow=75, ext=myext, crs=mycrs)
values(r2) <- rnorm(3750, 0, 2)
r3 <- raster(ncol=50, nrow=75, ext=myext, crs=mycrs)
values(r3) <- rnorm(3750, 0, 2)
r4 <- raster(ncol=50, nrow=75, ext=myext, crs=mycrs)
values(r4) <- rnorm(3750, 0, 2)
r5 <- raster(ncol=50, nrow=75, ext=myext, crs=mycrs)
values(r5) <- rnorm(3750, 0, 2)

# create list of rasters that the user can choose from in the shiny app
myras <- list(r1, r2, r3, r4, r5)
modis.rasters <- stack(myras)

# set up color display: reverse Spectral palette with 10 levels
my.max <- 10
x <- -10:my.max  # the observed range for chlorophyll in the data
names(modis.rasters) <- c("Week of 2016-04-01", "Week of 2016-04-08",
                          "Week of 2016-04-15", "Week of 2016-04-22",
                          "Week of 2016-04-29")
pal1 <- colorNumeric(palette = c("#5E4FA2", "#3288BD", "#66C2A5", "#ABDDA4",
                                 "#E6F598", "#FEE08B", "#FDAE61", "#F46D43",
                                 "#D53E4F", "#9E0142"),
                     domain = x, na.color = "transparent")

# Create a map for use in shiny app
map <- leaflet() %>% addTiles() %>%
  setView(lng = -86.0589, lat = 43, zoom = 7) %>%
  addLegend(pal = pal1, values = values(modis.rasters),
            title = 'Random normal variate (mean=0, SD=2)',
            position = "bottomleft", opacity = 1) %>%
  addMouseCoordinates(style = "basic")

# Now set up the UI
ui <- shinyUI(fluidPage(
  titlePanel("Stuff"),
  # Generate a row with a sidebar
  sidebarLayout(
    # Define the sidebar with one input
    # Here "period" is a weekly time period/raster
    sidebarPanel(
      selectInput("period", "Choose a time period:",
                  choices = names(modis.rasters)),
      hr(),
      helpText("Some raster data that I will replace.",
               br(),
               width = 8)
    ),
    # Create a spot for the map
    mainPanel(leafletOutput('raster_map', width = 800, height = 900))
  )
))

# Define a server for the Shiny app
server <- shinyServer(function(input, output){
  # Fill in the spot we created for a map
  output$raster_map <- renderLeaflet({
    map %>%
      addRasterImage(reactiveRaster(), colors = pal1,
                     layerId = input$period, opacity = 0.5) %>%
      addImageQuery(reactiveRaster(), type = "mousemove", digits = 2,
                    position = "topright", layerId = input$period)
  })
  reactiveRaster <- reactive({ modis.rasters[[input$period]] })
  # add the selected raster to the map
  observe({
    leafletProxy("raster_map") %>%
      clearImages() %>%
      addRasterImage(reactiveRaster(), colors = pal1,
                     layerId = input$period, opacity = 0.5)
  })
})

shinyApp(ui = ui, server = server)
```
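One variation worth trying (an untested sketch, not a confirmed fix; whether mapview's addImageQuery() works through leafletProxy() may depend on the package version): keep all per-layer calls in a single observer, so the raster image and its query handler are re-added together whenever input$period changes, instead of splitting them between renderLeaflet() and the observer.

```r
# Untested sketch: render the base map once, then update the image and its
# query handler together on every change of input$period.
server <- shinyServer(function(input, output){
  output$raster_map <- renderLeaflet({ map })
  reactiveRaster <- reactive({ modis.rasters[[input$period]] })
  observe({
    leafletProxy("raster_map") %>%
      clearImages() %>%
      addRasterImage(reactiveRaster(), colors = pal1,
                     layerId = input$period, opacity = 0.5) %>%
      addImageQuery(reactiveRaster(), type = "mousemove", digits = 2,
                    position = "topright", layerId = input$period)
  })
})
```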

--

David Warner

Research Fisheries Biologist

U.S.G.S. Great Lakes Science Center

1451 Green Road

Ann Arbor, MI 48105

734-214-9392

orcid.org/0000-0003-4939-5368


_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: Clustering Sd Error in Spatial Models.

On Tue, 9 Oct 2018, Amir Neto wrote:

> Hello,

>

> I am currently working on a project estimating a spatial panel model.

> Because I also estimate non-spatial models I am computing the clustered

> standard errors following Stock and Watson (2008).

>

> I tried to do the same for my spatial models however I am running into the

> some errors (depending if I bootstrap or not my clustered

> variance-covariance matrix). Below is a reproducible example.

Although it would be desirable for spatial models to match the lmtest/sandwich framework better, are Conley standard errors an alternative? See these interesting blog posts correcting an implementation:

https://darinchristensen.com/post/2017-08-21-correction/

https://darinchristensen.com/post/2015-08-30-conley/

https://github.com/darinchristensen/conley-se

Hope this helps,

Roger

>

>

> #### Example
>> library(splm)   # spml()
>> library(spdep)  # mat2listw()
>> data(Produc, package = "plm")
>> data(usaww)

>>

>> fm <- log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp

>>

>> fespaterr <- spml(fm, data = Produc, listw = mat2listw(usaww),

> + model="within", spatial.error="kkp")

>>

>> library(lmtest)

>> library(sandwich)

>>

>> vcov_test <- vcovCL(fespaterr, cluster = Produc$state)

> Error in UseMethod("estfun") :

> no applicable method for 'estfun' applied to an object of class "splm"

>

>> vcov_boot <- vcovBS(fespaterr, cluster = Produc$state, R=250)

> Error in terms.default(x) : no terms component nor attribute

> Error in terms.default(x) : no terms component nor attribute

> Error in terms.default(x) : no terms component nor attribute

>

> #############

> ### Non spatial

> library(lfe)

>

> fe_cluster <- felm(log(gsp) ~ log(pcap) + log(pc) + log(emp) +

> unemp|year+state|0|state,data=Produc)

>

>> summary(fe_cluster)

>

> Call:

> felm(formula = log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp |

> year + state | 0 | state, data = Produc)

>

> Residuals:

> Min 1Q Median 3Q Max

> -0.160369 -0.018026 -0.000859 0.016745 0.170752

>

> Coefficients:

> Estimate Cluster s.e. t value Pr(>|t|)

> log(pcap) -0.030176 0.060042 -0.503 0.6154

> log(pc) 0.168828 0.088331 1.911 0.0563 .

> log(emp) 0.769306 0.087700 8.772 <2e-16 ***

> unemp -0.004221 0.003294 -1.281 0.2005

> ---

> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

> ##########

>

> My questions are:

> 1) Is it possible to cluster the variance-covariance matrix on spatial

> models?

> 2) If so, what is the correct procedure?

>

>

> Thank you for your help,

> Amir

> --

> Amir B Ferreira Neto

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

> --

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

http://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: spgm

On Wed, 3 Oct 2018, Roger Bivand wrote:

> Please provide a reproducible example using the Produc dataset, so that

> it is clearer what the problem is. I do not know that there is any

> literature on such impacts, please provide references.

My cut at an example:

```r
library(splm)
data(Produc, package = "plm")
data(usaww)
fm <- log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp
matrix1 <- kronecker(diag(length(unique(Produc$year))), usaww)
listw1 <- mat2listw(matrix1, style = "W")
tr <- trW(as(listw1, "CsparseMatrix"), m = 100)
GM <- spgm(fm, data = Produc, listw = usaww, moments = "within",
           spatial.error = FALSE, lag = TRUE)
impacts(GM, listw = listw1)
impacts(GM, tr = tr)
GMi <- spgm(update(fm, . ~ . - unemp), data = Produc, listw = usaww,
            moments = "within", spatial.error = FALSE, lag = TRUE,
            endog = ~ unemp, instruments = ~ hwy + water + util)
summary(GMi)
GMii <- spgm(update(fm, . ~ . - unemp - log(emp)), data = Produc,
             listw = usaww, moments = "within", spatial.error = FALSE,
             lag = TRUE, endog = ~ unemp + log(emp),
             instruments = ~ hwy + water + util)
summary(GMii)
```

The summary objects show the coefficients, etc., for endogenous variables first, before the spatial coefficient. spdep::impacts.stsls() expects the spatial coefficient to be listed before the variables (dropping the intercept). If you use:

```r
put_endog_last <- function(stsls) {
  if (is.null(stsls$endog))
    stop("no endogenous variables in fitted model")
  n_endog <- length(all.vars(stsls$endog))
  coefs <- stsls$coefficients
  n_coefs <- length(coefs)
  flip <- c((n_endog + 1):n_coefs, 1:n_endog)
  stsls$coefficients <- coefs[flip]
  stsls$var <- stsls$var[flip, flip]
  stsls
}
```

to flip the orderings in the splm::spgm() output object, impacts can be

calculated using the spdep::impacts.stsls() method. I've added the splm

maintainer to CC - I'm unsure whether other functions in splm (or other

argument settings when lag = TRUE) would be affected by patching

splm::spgm() itself.
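As a hedged usage sketch (assuming the GMi object and tr trace vector from the example above): flip the coefficient ordering first, then let impacts() dispatch to the stsls method as before.

```r
# Sketch: reorder the coefficients so the spatial coefficient comes first,
# then compute impacts as for any stsls fit.
GMi_flipped <- put_endog_last(GMi)
impacts(GMi_flipped, tr = tr)
```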

Hope this helps,

Roger

>

> Roger

>

> Roger Bivand

> Norwegian School of Economics

> Bergen, Norway

>

>

>

>

> On Wed, Oct 3, 2018 at 11:21 AM +0200, Hulényi Martin <[hidden email]> wrote:

>

>

> Thank you very much!
> I have one more question regarding the output. I also have one endogenous variable in the model. Your code worked, but it did not show me the indirect and direct effects for the endogenous variable. Here is my reprex:
>
> spd_01 <- spgm(gdppcgr ~ lefpayr + lpopgr + linvr + lagwgipca + laglgdppc,
>                data = esifpdata, listw = dm1.lw,
>                model = "within", lag = TRUE, spatial.error = FALSE,
>                endog = ~ lefpayr, instruments = ~ areaprop,
>                method = "w2sls")
> matrix1 <- kronecker(diag(length(unique(esifpdata$years))), dm1)
> listw1 <- mat2listw(matrix1, style = "W")
> tr <- trW(as(listw1, "CsparseMatrix"), m = 100)
> impacts(spd_01, listw = listw1)
> impacts(spd_01, tr = tr)
> summary(impacts(spd_01, tr = tr, R = 1000), zstats = TRUE, short = TRUE)

>

> Best,

>

> MArtin Hulényi

>

>

> ________________________________________

> From: Roger Bivand
>
> Sent: 29 September 2018 14:52
>
> To: Hulényi Martin
>
> Cc: [hidden email]
>
> Subject: Re: [R-sig-Geo] spgm

>

> On Sat, 29 Sep 2018, Hulényi Martin wrote:

>

>> Dear all,

>>

>>

>> I would like to ask if there is a possibility to apply something

>> similiar to the "impacts" from spdep package for SAR regressions using

>> the spgm function from the splm package.

>>

>

> A reprex would have helped. Here is mine:
>
> data(Produc, package = "plm")
> data(usaww) # dense row-standardised weights matrix
> GM <- spgm(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp, data = Produc,
>            listw = usaww, moments = "fullweights", lag = TRUE,
>            spatial.error = FALSE)
> class(GM)
> ?impacts.stsls # spdep method for stsls objects
> head(Produc)
> length(unique(Produc$year)) # T
> big <- kronecker(diag(length(unique(Produc$year))), usaww)
> listw <- mat2listw(big, style = "W")
> tr <- trW(as(listw, "CsparseMatrix"), m = 100)
> impacts(GM, listw = listw)
> impacts(GM, tr = tr)
> summary(impacts(GM, tr = tr, R = 1000), zstats = TRUE, short = TRUE)

>

> The splm:::impacts.splm() method cannot dispatch on stsls objects, so such

> objects fall through to the spdep:::impacts.stsls() method, but there the data

> have n x T rows while listw covers only n. Looking inside splm:::impacts.splm(),

> you see that a sparse kronecker product matrix is constructed - either do

> the same if your n x T is large, or follow the above using a dense

> kronecker product and cast back to listw representation to create the

> trace vector.

>

> Hope this clarifies,

>

> Roger

>

>>

>> Best regards,

>>

>>

>> Martin Hulényi

>>

>>

>> Please consider the environment before printing this e-mail. Thanks

>>

>> [[alternative HTML version deleted]]

>>

>>

>

> --

> Roger Bivand

> Department of Economics, Norwegian School of Economics,

> Helleveien 30, N-5045 Bergen, Norway.

> voice: +47 55 95 93 55; e-mail: [hidden email]

> http://orcid.org/0000-0003-2392-6140

> https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

> Please consider the environment before printing this e-mail. Thanks

>

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

http://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Roger Bivand

Department of Economics

Norwegian School of Economics

Helleveien 30

N-5045 Bergen, Norway

> Please provide a reproducible example using the Produc dataset, so that

> it is clearer what the problem is. I do not know that there is any

> literature on such impacts, please provide references.

My cut at an example:

library(splm)
library(spdep)  # mat2listw(), trW(), impacts()
data(Produc, package="plm")
data(usaww)
fm <- log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp
matrix1 <- kronecker(diag(length(unique(Produc$year))), usaww)
listw1 <- mat2listw(matrix1, style="W")
tr <- trW(as(listw1, "CsparseMatrix"), m=100)
GM <- spgm(fm, data=Produc, listw = usaww, moments="within",
    spatial.error = FALSE, lag = TRUE)
impacts(GM, listw=listw1)
impacts(GM, tr=tr)
GMi <- spgm(update(fm, . ~ . - unemp), data=Produc, listw = usaww,
    moments="within", spatial.error = FALSE, lag = TRUE, endog = ~ unemp,
    instruments = ~ hwy + water + util)
summary(GMi)
GMii <- spgm(update(fm, . ~ . - unemp - log(emp)), data=Produc, listw = usaww,
    moments="within", spatial.error = FALSE, lag = TRUE,
    endog = ~ unemp + log(emp), instruments = ~ hwy + water + util)
summary(GMii)

The summary objects show the coefficients, etc. for endogenous variables

first, before the spatial coefficient. spdep::impacts.stsls() expects the

spatial coefficient listed before the variables (dropping the intercept).

If you use:

put_endog_last <- function(stsls) {
    if (is.null(stsls$endog)) stop("no endogenous variables in fitted model")
    n_endog <- length(all.vars(stsls$endog))
    coefs <- stsls$coefficients
    n_coefs <- length(coefs)
    flip <- c((n_endog+1):n_coefs, 1:n_endog)
    stsls$coefficients <- coefs[flip]
    stsls$var <- stsls$var[flip, flip]
    stsls
}

to flip the orderings in the splm::spgm() output object, impacts can be

calculated using the spdep::impacts.stsls() method. I've added the splm

maintainer to CC - I'm unsure whether other functions in splm (or other

argument settings when lag = TRUE) would be affected by patching

splm::spgm() itself.

Hope this helps,

Roger
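The reordering that put_endog_last() performs is just a simultaneous permutation of the coefficient vector and of both dimensions of its variance matrix. A language-neutral pure-Python sketch of the same idea, with hypothetical numbers rather than spgm output:

```python
def put_endog_last(coefs, vcov, n_endog):
    """Move the first n_endog coefficients behind the rest, applying the
    same permutation to the rows and columns of the variance matrix."""
    n = len(coefs)
    flip = list(range(n_endog, n)) + list(range(n_endog))
    return ([coefs[i] for i in flip],
            [[vcov[i][j] for j in flip] for i in flip])

# Hypothetical fit: the single endogenous coefficient is listed first.
coefs = [0.3, 1.1, -0.2]
vcov = [[1.0, 2.0, 3.0],
        [2.0, 4.0, 5.0],
        [3.0, 5.0, 6.0]]
c2, v2 = put_endog_last(coefs, vcov, 1)
print(c2)        # [1.1, -0.2, 0.3] -- endogenous term now last
print(v2[2][2])  # 1.0 -- its variance moved with it
```

Permuting rows and columns together keeps every variance and covariance attached to its coefficient, which is why the R version indexes with `[flip, flip]`.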


### Re: Error when performing a spatial eigenvector selection in R

On Wed, 10 Oct 2018, DIEGO CEPEDA GOMEZ wrote:

> Dear all,

>

> I am currently trying to do a Gaussian linear regression in R with data

> that may be spatially autocorrelated. My dataset contains geographic

> coordinates (value of longitude, value of latitude), species, independent

> variables (BS and LTS) and some explanatory variables; it looks like:

>

> head(dataset)

>

> coordinates SPECIES BS LTS DEPTH OCEAN
> (155, 47) Cristaphyes abyssorum 8.66 28.3 5373 WPac
> (150, 41) Cristaphyes abyssorum 8.66 28.3 5250 WPac
> (-72, -41) Cristaphyes anomalus 8.69 NA 35 EPac
> (-74, -44) Cristaphyes anomalus 8.69 NA 35 EPac
> (-57, -46) Cristaphyes anomalus 8.69 NA NA WAtl
> (29, 80) Cristaphyes arctous 8.32 27.0 393 EAtl

>

> tail(dataset)

>

> coordinates SPECIES BS LTS DEPTH OCEAN
> (-80, 27) Zelinkaderes brightae NA 20.1 13.04 WAtl
> (-80, 27) Zelinkaderes floridensis 7.10 12.4 140.00 WAtl
> (35, 25) Zelinkaderes klepali NA 25.0 1.00 WInd
> (9, 57) Zelinkaderes submersus 7.99 21.4 30.00 EAtl
> (130, 36) Zelinkaderes yong NA 12.7 4.50 WAtl

>

> The dataset also includes the values of latitude and longitude in separate

> columns.

>

> I extracted positive eigenvector-based spatial filters from a truncated

> matrix of geographic distances among sampling sites. I would like to treat

> spatial filters as candidate explanatory variables in my linear regression

> model. I did this as following:

>

> First of all, I created a neighbor list object (nb). In my case of

> irregular sampling, I used the function knearneigh of the R package spdep:

>

> knea8 <- knearneigh(coordinates(dataset), longlat=TRUE, k=8)

>

> neib8 <-knn2nb(knea8)

>

> Then, I created a spatial weighting matrix with the function nb2listw of

> the R package spdep:

>

> nb2listw(neib8)

>

> distgab8 <- nbdists(neib8, coordinates(dataset))

>

> str(distgab8)

>

> fdist8 <- lapply(distgab8, function(x) 1-x/max(dist(coordinates(dataset))))

>

> listwgab8 <- nb2listw(neib8, glist = fdist8, style = "B")

>

> Then, I built spatial predictors to incorporate them in the Gaussian linear

> regression. I did this with the mem function of the R package adespatial,

> as following:

If you are choosing adespatial::mem(), it will probably work differently

from spdep::SpatialFiltering(), based on

Tiefelsdorf M, Griffith DA. (2007) Semiparametric Filtering of

Spatial Autocorrelation: The Eigenvector Approach. Environment and

Planning A, 39 (5) 1193 - 1221.

In that approach, the eigenvectors are chosen to reduce the residual

spatial autocorrelation.
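Both selection strategies score candidate eigenvectors with Moran's I, I = (n/S0) * z'Wz / z'z, where z holds the centred values and S0 is the total weight. A minimal pure-Python sketch of the statistic on a hypothetical 4-site chain (illustrative only, not the poster's data or the moran.randtest implementation):

```python
def morans_i(x, W):
    """Moran's I of values x under a (list-of-lists) weights matrix W."""
    n = len(x)
    z = [v - sum(x) / n for v in x]                 # centre the values
    s0 = sum(sum(row) for row in W)                 # total weight S0
    num = sum(W[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
    return (n / s0) * num / sum(v * v for v in z)

# Hypothetical binary weights on a 4-site chain: 1-2-3-4.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(round(morans_i([1.0, 2.0, 3.0, 4.0], W), 4))  # 0.3333 -- smooth trend
print(round(morans_i([1.0, 4.0, 2.0, 3.0], W), 4))  # -0.8667 -- shuffled
```

A smooth spatial trend yields a positive value, a shuffled one a negative value, which is what ranking eigenvectors by Moran's I exploits.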

>

> mem.gab8 <- mem(listwgab8)

>

> Additionally, Moran's I was computed and tested for each eigenvector with

> the moran.randtest function, as following:

>

> moranI8 <-moran.randtest(mem.gab8, listwgab8, 99)

>

> I obtained some eigenvectors with significant positive spatial

> autocorrelation. Now, I would like to include them in the Gaussian linear

> regression. I tried to do this with the function ME of spdep, as following:

>

Use spdep::SpatialFiltering() rather than spdep::ME() for the Gaussian

case. Probably your listwgab8 refers to a different number of observations

than dataset (any missing values in your variables?).

Roger

> GLM1 <- ME(BS~LATITUDE, data=dataset, listw=listwgab8,

> family=gaussian, nsim=99, alpha=0.05)

>

> Unfortunately, I receive this error:

>

> Error in sW %*% var : Cholmod error 'X and/or Y have wrong dimensions' at

> file ../MatrixOps/cholmod_sdmult.c, line 90

>

>

> Would anybody know how I could solve this error? Or, is there another

> way to perform a spatial eigenvector selection in a Gaussian linear

> regression?

>

> Thank you in advance

>

> Best wishes,

>

> Diego

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

http://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Roger Bivand

Department of Economics

Norwegian School of Economics

Helleveien 30

N-5045 Bergen, Norway


### Error when performing a spatial eigenvector selection in R

Dear all,

I am currently trying to do a Gaussian linear regression in R with data

that may be spatially autocorrelated. My dataset contains geographic

coordinates (value of longitude, value of latitude), species, independent

variables (BS and LTS) and some explanatory variables; it looks like:

head(dataset)

coordinates SPECIES BS LTS DEPTH OCEAN
(155, 47) Cristaphyes abyssorum 8.66 28.3 5373 WPac
(150, 41) Cristaphyes abyssorum 8.66 28.3 5250 WPac
(-72, -41) Cristaphyes anomalus 8.69 NA 35 EPac
(-74, -44) Cristaphyes anomalus 8.69 NA 35 EPac
(-57, -46) Cristaphyes anomalus 8.69 NA NA WAtl
(29, 80) Cristaphyes arctous 8.32 27.0 393 EAtl

tail(dataset)

coordinates SPECIES BS LTS DEPTH OCEAN
(-80, 27) Zelinkaderes brightae NA 20.1 13.04 WAtl
(-80, 27) Zelinkaderes floridensis 7.10 12.4 140.00 WAtl
(35, 25) Zelinkaderes klepali NA 25.0 1.00 WInd
(9, 57) Zelinkaderes submersus 7.99 21.4 30.00 EAtl
(130, 36) Zelinkaderes yong NA 12.7 4.50 WAtl

The dataset also includes the values of latitude and longitude in separate

columns.

I extracted positive eigenvector-based spatial filters from a truncated

matrix of geographic distances among sampling sites. I would like to treat

spatial filters as candidate explanatory variables in my linear regression

model. I did this as following:

First of all, I created a neighbor list object (nb). In my case of

irregular sampling, I used the function knearneigh of the R package spdep:

knea8 <- knearneigh(coordinates(dataset), longlat=TRUE, k=8)

neib8 <-knn2nb(knea8)

Then, I created a spatial weighting matrix with the function nb2listw of

the R package spdep:

nb2listw(neib8)

distgab8 <- nbdists(neib8, coordinates(dataset))

str(distgab8)

fdist8 <- lapply(distgab8, function(x) 1-x/max(dist(coordinates(dataset))))

listwgab8 <- nb2listw(neib8, glist = fdist8, style = "B")

Then, I built spatial predictors to incorporate them in the Gaussian linear

regression. I did this with the mem function of the R package adespatial,

as following:

mem.gab8 <- mem(listwgab8)

Additionally, Moran's I was computed and tested for each eigenvector with

the moran.randtest function, as following:

moranI8 <-moran.randtest(mem.gab8, listwgab8, 99)

I obtained some eigenvectors with significant positive spatial

autocorrelation. Now, I would like to include them in the Gaussian linear

regression. I tried to do this with the function ME of spdep, as following:

GLM1 <- ME(BS~LATITUDE, data=dataset, listw=listwgab8,

family=gaussian, nsim=99, alpha=0.05)

Unfortunately, I receive this error:

Error in sW %*% var : Cholmod error 'X and/or Y have wrong dimensions' at

file ../MatrixOps/cholmod_sdmult.c, line 90

Would anybody know how I could solve this error? Or, is there another

way to perform a spatial eigenvector selection in a Gaussian linear

regression?

Thank you in advance

Best wishes,

Diego

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Clustering Sd Error in Spatial Models.

Hello,

I am currently working on a project estimating a spatial panel model.

Because I also estimate non-spatial models I am computing the clustered

standard errors following Stock and Watson (2008).

I tried to do the same for my spatial models; however, I am running into

some errors (depending on whether or not I bootstrap my clustered

variance-covariance matrix). Below is a reproducible example.

#### Example

> data(Produc, package = "plm")

> data(usaww)

>

> fm <- log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp

>

> fespaterr <- spml(fm, data = Produc, listw = mat2listw(usaww),

+ model="within", spatial.error="kkp")

>

> library(lmtest)

> library(sandwich)

>

> vcov_test <- vcovCL(fespaterr, cluster = Produc$state)

Error in UseMethod("estfun") :

no applicable method for 'estfun' applied to an object of class "splm"

> vcov_boot <- vcovBS(fespaterr, cluster = Produc$state, R=250)

Error in terms.default(x) : no terms component nor attribute

Error in terms.default(x) : no terms component nor attribute

Error in terms.default(x) : no terms component nor attribute

#############

### Non spatial

library(lfe)

fe_cluster <- felm(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp | year + state | 0 | state, data = Produc)

> summary(fe_cluster)

Call:

felm(formula = log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp |

year + state | 0 | state, data = Produc)

Residuals:

Min 1Q Median 3Q Max

-0.160369 -0.018026 -0.000859 0.016745 0.170752

Coefficients:

Estimate Cluster s.e. t value Pr(>|t|)

log(pcap) -0.030176 0.060042 -0.503 0.6154

log(pc) 0.168828 0.088331 1.911 0.0563 .

log(emp) 0.769306 0.087700 8.772 <2e-16 ***

unemp -0.004221 0.003294 -1.281 0.2005

---

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

##########

My questions are:

1) Is it possible to cluster the variance-covariance matrix on spatial

models?

2) If so, what is the correct procedure?

Thank you for your help,

Amir

--

Amir B Ferreira Neto

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo
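As background to the questions, the clustered variance computed for the non-spatial model is a sandwich estimator: V = (X'X)^-1 [sum over clusters g of (X_g'u_g)(X_g'u_g)'] (X'X)^-1, with scores summed within each cluster before entering the "meat". A minimal one-regressor, intercept-free pure-Python sketch (hypothetical data, no finite-sample correction; not the felm or vcovCL implementation):

```python
def ols_clustered(x, y, cluster):
    """Slope of a one-regressor, no-intercept OLS fit, with a
    cluster-robust variance: scores x_i * u_i are summed within
    clusters before being squared into the sandwich's 'meat'."""
    sxx = sum(xi * xi for xi in x)
    beta = sum(xi * yi for xi, yi in zip(x, y)) / sxx
    resid = [yi - beta * xi for xi, yi in zip(x, y)]
    score = {}
    for xi, ui, g in zip(x, resid, cluster):
        score[g] = score.get(g, 0.0) + xi * ui   # within-cluster score sum
    meat = sum(s * s for s in score.values())
    return beta, meat / (sxx * sxx)

# Hypothetical panel: two states, two years each.
beta, var = ols_clustered([1.0, 2.0, 3.0, 4.0],
                          [1.0, 2.0, 4.0, 3.0],
                          ["a", "a", "b", "b"])
print(beta)  # 0.9666... (29/30)
```

Summing scores within clusters is what lets errors be arbitrarily correlated inside a state while remaining independent across states.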


### Re: spgm

Please provide a reproducible example using the Produc dataset, so that it is clearer what the problem is. I do not know that there is any literature on such impacts, please provide references.

Roger

Roger Bivand

Norwegian School of Economics

Bergen, Norway

On Wed, Oct 3, 2018 at 11:21 AM +0200, "Hulényi Martin" <[hidden email]> wrote:

Thank you very much!

I have one more question regarding the output. I also have one endogenous variable in the model. Your code worked, but it did not show me the indirect and direct effects for the

endogenous variable. Here is my code:

spd_01 <- spgm(gdppcgr~lefpayr+lpopgr+linvr+lagwgipca + laglgdppc,

data=esifpdata, listw=dm1.lw,

model="within", lag=TRUE, spatial.error= FALSE, endog = ~ lefpayr,

instruments=~areaprop,

method="w2sls")

matrix1 <- kronecker(diag(length(unique(esifpdata$years))), dm1)

listw1 <- mat2listw(matrix1, style="W")

tr <- trW(as(listw1, "CsparseMatrix"), m=100)

impacts(spd_01, listw=listw1)

impacts(spd_01, tr=tr)

summary(impacts(spd_01, tr=tr, R=1000), zstats=TRUE, short=TRUE)

Best,

Martin Hulényi

________________________________________

From: Roger Bivand

Sent: 29 September 2018 14:52

To: Hulényi Martin

Cc: [hidden email]

Subject: Re: [R-sig-Geo] spgm

On Sat, 29 Sep 2018, Hulényi Martin wrote:

> Dear all,

>

>

> I would like to ask if there is a possibility to apply something

> similar to the "impacts" from the spdep package for SAR regressions using

> the spgm function from the splm package.

>

A reprex would have helped. Here is mine:

data(Produc, package = "plm")

data(usaww) # dense row-standardised weights matrix

GM <- spgm(log(gsp)~log(pcap)+log(pc)+log(emp)+unemp, data=Produc,

listw = usaww, moments="fullweights", lag=TRUE, spatial.error = FALSE)

class(GM)

?impacts.stsls # spdep method for stsls objects

head(Produc)

length(unique(Produc$year)) # T

big <- kronecker(diag(length(unique(Produc$year))), usaww)

listw <- mat2listw(big, style="W")

tr <- trW(as(listw, "CsparseMatrix"), m=100)

impacts(GM, listw=listw)

impacts(GM, tr=tr)

summary(impacts(GM, tr=tr, R=1000), zstats=TRUE, short=TRUE)

The splm:::impacts.splm() method cannot dispatch on stsls objects, so such

objects fall through to the spdep:::impacts.stsls() method, but there the data

have n x T rows while listw covers only n. Looking inside splm:::impacts.splm(),

you see that a sparse kronecker product matrix is constructed - either do

the same if your n x T is large, or follow the above using a dense

kronecker product and cast back to listw representation to create the

trace vector.
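The dense-kronecker construction described here, I_T (x) W, can be sketched outside R as well; a minimal pure-Python illustration with a hypothetical 3-unit row-standardised weights matrix (not usaww) and T = 2 periods:

```python
# Hypothetical row-standardised weights for n = 3 spatial units:
# zero diagonal, each row summing to 1.
W = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
n, T = len(W), 2

# I_T (x) W is block-diagonal of dimension (n*T) x (n*T): W repeated
# once per period, so observations only neighbour units in the same period.
big_W = [[W[i % n][j % n] if i // n == j // n else 0.0
          for j in range(n * T)]
         for i in range(n * T)]

print(len(big_W))                   # 6 rows
print(big_W[0][3], big_W[3][0])     # 0.0 0.0 -- no cross-period links
print([sum(row) for row in big_W])  # every row still sums to 1.0
```

Row-standardisation survives the product because each diagonal block is W itself, which is why the expanded matrix can be cast back to a listw with style="W".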

Hope this clarifies,

Roger

>

> Best regards,

>

>

> Martin Hulényi

>

>

> Please consider the environment before printing this e-mail. Thanks

>

> [[alternative HTML version deleted]]

>

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

http://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

Please consider the environment before printing this e-mail. Thanks

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Roger Bivand

Department of Economics

Norwegian School of Economics

Helleveien 30

N-5045 Bergen, Norway

Roger

Roger Bivand

Norwegian School of Economics

Bergen, Norway


### Re: spgm

Thank you very much!

I have one more question regarding the output. I also have one endogenous variable in the model. Your code worked, but it did not show the indirect and direct effects for the endogenous variable. Here is my code:

spd_01 <- spgm(gdppcgr~lefpayr+lpopgr+linvr+lagwgipca + laglgdppc,

data=esifpdata, listw=dm1.lw,

model="within", lag=TRUE, spatial.error= FALSE, endog = ~ lefpayr,

instruments=~areaprop,

method="w2sls")

matrix1 <- kronecker(diag(length(unique(esifpdata$years))), dm1)

listw1 <- mat2listw(matrix1, style="W")

tr <- trW(as(listw1, "CsparseMatrix"), m=100)

impacts(spd_01, listw=listw1)

impacts(spd_01, tr=tr)

summary(impacts(spd_01, tr=tr, R=1000), zstats=TRUE, short=TRUE)

Best,

Martin Hulényi

________________________________________

From: Roger Bivand <[hidden email]>

Sent: 29 September 2018 14:52

To: Hulényi Martin

Cc: [hidden email]

Subject: Re: [R-sig-Geo] spgm

On Sat, 29 Sep 2018, Hulényi Martin wrote:

> Dear all,

>

>

> I would like to ask if there is a possibility to apply something

> similar to the "impacts" from the spdep package for SAR regressions using

> the spgm function from the splm package.

>

A reprex would have helped. Here is mine:

data(Produc, package = "plm")

data(usaww) # dense row-standardised weights matrix

GM <- spgm(log(gsp)~log(pcap)+log(pc)+log(emp)+unemp, data=Produc,

listw = usaww, moments="fullweights", lag=TRUE, spatial.error = FALSE)

class(GM)

?impacts.stsls # spdep method for stsls objects

head(Produc)

length(unique(Produc$year)) # T

big <- kronecker(diag(length(unique(Produc$year))), usaww)

listw <- mat2listw(big, style="W")

tr <- trW(as(listw, "CsparseMatrix"), m=100)

impacts(GM, listw=listw)

impacts(GM, tr=tr)

summary(impacts(GM, tr=tr, R=1000), zstats=TRUE, short=TRUE)

The splm:::impacts.splm() method cannot dispatch on stsls objects, so these

fall through to the spdep:::impacts.stsls() method, but there the data rows are

n x T while listw has only n rows. Looking inside splm:::impacts.splm(),

you see that a sparse Kronecker product matrix is constructed - either do

the same if your n x T is large, or follow the above using a dense

Kronecker product and cast back to the listw representation to create the

trace vector.

Hope this clarifies,

Roger

>
> Best regards,
>
> Martin Hulényi
>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

http://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en



### Re: Linear referencing in R

Thank you very much, Kent! This is pretty much what I was looking for. The

only problem is that the new shapefile didn't come with the attributes from

the topoESQ data frame. I made some adjustments, probably not the best

ones, but I made it work.

topoESQ_comp <- read.csv("Files/Topo_estacas_E.csv") %>%

select(-1) %>%

rename(height = Altura,

side = Lado) %>%

filter(to_km <= 871)

Estacas_1m <- st_read("Files/ESTACAS_1m_EFC_0-871_Policonic_SIRGAS.shp")

topoESQ_comp %>%

purrr::pmap(function(from_km, to_km, ...) {

st_linestring(do.call(rbind,

Estacas_1m$geometry[((1000*from_km):(1000*to_km))+1]))

}) %>%

st_sfc(., crs = 5880) %>%

cbind(topoESQ_comp, .) %>%

st_as_sf(.) %>%

st_write(., dsn = "Files/Topo_esq.shp", delete_dsn = T)

Best regards,

Rubem Dornas.

On Sat, 29 Sep 2018 at 22:17, Kent Johnson <[hidden email]>

wrote:

> You need to snip out the sections of the railroad file rather than

> connecting the endpoints. Maybe this is closer to what you want?

>

> segments = topoESQ_comp %>%

> filter(to_km <= 871) %>%

> purrr::pmap(function(from_km, to_km, ...) {

> st_linestring(do.call(rbind,

> Estacas_1m$geometry[((1000*from_km):(1000*to_km))+1]))

> }) %>%

> st_sfc

>

> Kent Johnson

>

>

>> Date: Fri, 28 Sep 2018 19:52:03 -0300

>> From: Rubem Dornas <[hidden email]>

>> To: [hidden email]

>> Subject: [R-sig-Geo] Linear referencing in R

>> Hi, people! I hope I'm not being too speculative in my question and that you
>> can understand my problem.

>>

>> Well, I have a railroad shapefile and I have two csv files corresponding

>> to

>> the topography in each side of the railroad. The csv are organized in this

>> way:

>>

>> ID_topo, from_km, to_km, height

>> 1, 0, 1.91, 15

>> 2, 1.91, 2.23, -3

>>

>> I created a point shapefile for each meter of the railroad and then I

>> proceeded with a join (not spatial) between the from_km of the topography

>> data frame and the km mark (km_calc) from the point railroad.

>>

>> My goal is to create shapefiles for each side of the railroad topography.
>> The issue is that when I try to make a new line shapefile from the topography
>> based on the points of the railroad, what I get is a line that doesn't
>> follow the curvature of the railroad. Using the example of the csv above,
>> what I get are several straight lines linking each from_km to to_km.

>> (Here

>> is a link to a print screen from QGIS:

>>

>> https://www.dropbox.com/s/erfsst8pasoqj64/Captura%20de%20tela%202018-09-28%2019.49.47.png?dl=0

>> )

>>

>>



### Video recordings from the OpenGeoHub Summer School IBOT, Prague August 19-25, 2018

The video recordings from the OpenGeoHub Summer School IBOT, Prague

August 19-25, 2018 are now available via our youtube channel:

https://www.youtube.com/channel/UC6HFFFYiV4zEYJlQMIXemWA

I am trying to organize all current and past materials into a structured

course/archive so that everyone can more easily find her/his

topics of interest:

https://opengeohub.org/course

*This might take me a few weeks because we have collected many materials

over the years.

Many many thanks to the lecturers: Edzer Pebesma (IfGI University of

Muenster, Muenster Germany), Roger Bivand (Department of Economics,

Norwegian School of Economics, Bergen, Norway), Markus Neteler

(Mundialis, Bonn, Germany), Tim Appelhans (GfK Geomarketing, Germany),

Robin Lovelace (University of Leeds, UK), Jannes Münchow (Geographic

Information Science, Friedrich Schiller University Jena, Jena, Germany),

Jakub Nowosad (Space Informatics Lab, University of Cincinnati, Ohio

USA), Veronica Andreo (National Institute of Tropical Medicine INMeT,

Argentina) and Hanna Meyer & Chris Reudenbach (Environmental

Informatics, Philipps University Marburg, Germany) for coming to Prague

and for sharing their materials!

And many many thanks to Matej Man and Jan Wild of IBOT's department of

GIS and remote sensing for being such excellent hosts!

yours,

Tom Hengl

The OpenGeoHub Foundation

http://www.opengeohub.org/about


### Re: Linear referencing in R

You need to snip out the sections of the railroad file rather than

connecting the endpoints. Maybe this is closer to what you want?

segments = topoESQ_comp %>%

filter(to_km <= 871) %>%

purrr::pmap(function(from_km, to_km, ...) {

st_linestring(do.call(rbind,

Estacas_1m$geometry[((1000*from_km):(1000*to_km))+1]))

}) %>%

st_sfc
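[Editor's note, not part of the original exchange: the index arithmetic in the pmap call above relies on the railroad point layer having one vertex per metre, so km marks map to vertex indices via 1000*km. A toy Python sketch of that slicing, with hypothetical data:]

```python
# Stand-in for the railroad point shapefile: one (x, y) vertex per metre
# along a straight 5 km toy railroad, indexed 0..4999.
points = [(i, 0.0) for i in range(5000)]

def segment(points, from_km, to_km):
    """Slice the vertices between two km marks, mirroring the R
    expression geometry[((1000*from_km):(1000*to_km))+1]."""
    i0, i1 = int(round(1000 * from_km)), int(round(1000 * to_km))
    return points[i0:i1 + 1]

seg = segment(points, 1.91, 2.23)
print(len(seg))  # 321 vertices: one per metre from km 1.91 to km 2.23 inclusive
```

Because the slice keeps every intermediate vertex rather than just the two endpoints, the resulting linestring follows the curvature of the railroad.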

Kent Johnson

> Date: Fri, 28 Sep 2018 19:52:03 -0300

> From: Rubem Dornas <[hidden email]>

> To: [hidden email]

> Subject: [R-sig-Geo] Linear referencing in R

> Hi, people! I hope I'm not being too speculative in my question and that you
> can understand my problem.

>

> Well, I have a railroad shapefile and I have two csv files corresponding to

> the topography in each side of the railroad. The csv are organized in this

> way:

>

> ID_topo, from_km, to_km, height

> 1, 0, 1.91, 15

> 2, 1.91, 2.23, -3

>

> I created a point shapefile for each meter of the railroad and then I

> proceeded with a join (not spatial) between the from_km of the topography

> data frame and the km mark (km_calc) from the point railroad.

>

> My goal is to create shapefiles for each side of the railroad topography.
> The issue is that when I try to make a new line shapefile from the topography
> based on the points of the railroad, what I get is a line that doesn't
> follow the curvature of the railroad. Using the example of the csv above,
> what I get are several straight lines linking each from_km to to_km. (Here

> is a link to a print screen from QGIS:

>

> https://www.dropbox.com/s/erfsst8pasoqj64/Captura%20de%20tela%202018-09-28%2019.49.47.png?dl=0

> )

>

>



### Re: spgm

On Sat, 29 Sep 2018, Hulényi Martin wrote:

> Dear all,

>

>

> I would like to ask if there is a possibility to apply something

> similar to the "impacts" from the spdep package for SAR regressions using

> the spgm function from the splm package.

>

A reprex would have helped. Here is mine:

data(Produc, package = "plm")

data(usaww) # dense row-standardised weights matrix

GM <- spgm(log(gsp)~log(pcap)+log(pc)+log(emp)+unemp, data=Produc,

listw = usaww, moments="fullweights", lag=TRUE, spatial.error = FALSE)

class(GM)

?impacts.stsls # spdep method for stsls objects

head(Produc)

length(unique(Produc$year)) # T

big <- kronecker(diag(length(unique(Produc$year))), usaww)

listw <- mat2listw(big, style="W")

tr <- trW(as(listw, "CsparseMatrix"), m=100)

impacts(GM, listw=listw)

impacts(GM, tr=tr)

summary(impacts(GM, tr=tr, R=1000), zstats=TRUE, short=TRUE)

The splm:::impacts.splm() method cannot dispatch on stsls objects, so these

fall through to the spdep:::impacts.stsls() method, but there the data rows are

n x T while listw has only n rows. Looking inside splm:::impacts.splm(),

you see that a sparse Kronecker product matrix is constructed - either do

the same if your n x T is large, or follow the above using a dense

Kronecker product and cast back to the listw representation to create the

trace vector.
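[Editor's note, not part of the original exchange: the reason the Kronecker expansion yields a usable trace vector is that traces of powers of I_T (x) W are simply T times the traces of powers of W, so the panel weights matrix carries no new spatial information. A minimal numerical check in Python/numpy, with a hypothetical 3 x 3 row-standardised W:]

```python
import numpy as np

# Hypothetical small row-standardised weights matrix W (n = 3).
W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

T = 4                          # number of time periods
big = np.kron(np.eye(T), W)    # the (nT x nT) panel weights matrix I_T (x) W

# tr((I_T (x) W)^k) = tr(I_T (x) W^k) = T * tr(W^k) for every power k,
# which is why the trace vector of the big matrix matches the panel
# dimensions required by impacts() while encoding only W itself.
for k in range(1, 6):
    lhs = np.trace(np.linalg.matrix_power(big, k))
    rhs = T * np.trace(np.linalg.matrix_power(W, k))
    assert np.isclose(lhs, rhs)
print("traces match")
```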

Hope this clarifies,

Roger

>
> Best regards,
>
> Martin Hulényi
>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

http://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en




### Re: Consolidated SRS database/list?

This blog post:

https://erouault.blogspot.com/2018/09/srs-barn-raising-4th-report.html

gives the status of the proposed generic database to ship with PROJ and to be

used by software using PROJ.

Roger

On Sat, 22 Sep 2018, Alex Mandel wrote:

> Though it requires Internet access, what about hitting the epsg.io API described

> on https://github.com/klokantech/epsg.io

>

> Thanks,

> Alex

>

> On 09/22/2018 05:23 AM, Vijay Lulla wrote:

>> Alex,

>> Thanks for QGIS's srs.db! I wasn't aware of it.

>>

>> I currently use the spatial_ref_sys from PostGIS to create a SRS data frame

>> but plan to use rgdal::make_EPSG more often. The reason I brought this up

>> is because on the lab computers (cannot install stuff on it) where I teach

>> there is no PostGIS and I didn't know how to look up EPSG codes for various

>> SRS definitions from within R. I hadn't thought of Spatialite metadata as

>> a viable alternative. So, thanks for that. I will look into it and most

>> likely use it in conjunction with make_EPSG!

>>

>> Finally, I am really looking forward to the consolidated SRS database from

>> the gdal barn-raising effort! This consolidated database will be

>> invaluable, and of great aid/service, to the geospatial community, IMO.

>> Thanks,

>> Vijay.

>>

>> On Sat, Sep 22, 2018 at 1:22 AM Alex Mandel <[hidden email]>

>> wrote:

>>

>>> QGIS makes one

>>> https://github.com/qgis/QGIS/blob/master/resources/srs.db

>>> There's some script in the build that updates it also, not without issue:

>>> https://issues.qgis.org/issues/17993

>>>

>>> I suppose you could also dump out how PostGIS does it to SQLite, or use

>>> the Spatialite metadata table.

>>> https://www.gaia-gis.it/gaia-sins/spatialite-cookbook/html/metadata.html

>>>

>>> But the thread mentioned that goes back to the MetaCRS mailing list is

>>> probably the right place in the community to revive the discussion.

>>>

>>> Seems like something to encourage, and a good topic for an OSGeo

>>> sponsored sprint.

>>>

>>> Thanks,

>>> Alex

>>>

>>> On 09/21/2018 12:32 AM, Roger Bivand wrote:

>>>> On Thu, 20 Sep 2018, Vijay Lulla wrote:

>>>>

>>>>> Ok, thanks! While the page provided information about the project and its
>>>>> funding status, I couldn't find the SQLite database. Do you happen to know
>>>>> when this will be available?

>>>>

>>>> No more than is on that page, plus the time needed to re-write plenty of

>>>> sf, lwgeom, rgdal and sp. At that stage, contributions welcome!

>>>>

>>>> Roger

>>>>

>>>>>

>>>>> On Thu, Sep 20, 2018 at 1:02 PM Roger Bivand <[hidden email]>

>>> wrote:

>>>>>

>>>>>> On Thu, 20 Sep 2018, Vijay Lulla wrote:

>>>>>>

>>>>>>> Dear list members,

>>>>>>> A few years ago Roger Bivand posted a discussion (

>>>>>>> https://stat.ethz.ch/pipermail/r-sig-geo/2015-August/023204.html )

>>>>>>> about

>>>>>>> consolidating SRS definitions into a SQLite database and I am

>>> wondering

>>>>>> if

>>>>>>> there has been any development along those lines.

>>>>>>

>>>>>> Rather than trying this just within R, we're hoping that the GDAL

>>>>>> barn-raising effort:

>>>>>>

>>>>>> https://gdalbarn.com/

>>>>>>

>>>>>> will take us there and further, and be much better than having a

>>>>>> non-standard implementation.

>>>>>>

>>>>>> When that effort is done, we'll be open for ideas about interfacing it

>>>>>> through PROJ and GDAL, which now ship with CSV files that we copy into

>>>>>> Windows and MacOS binary packages (rgdal, sf, lwgeom).

>>>>>>

>>>>>> For now, if it helps, rgdal::make_EPSG() reads the EPSG CSV file

>>> shipped

>>>>>> with PROJ into the R workspace as a data.frame.

>>>>>>

>>>>>> Roger

>>>>>>

>>>>>>> Specifically, is there any consolidated collection of SRS

>>>>>>> definitions in

>>>>>>> R (either a data.frame or tibble or SQLite backed) that are being used

>>>>>>> by geospatial packages that users can use too? If so, can you please

>>>>>>> point me to it? If not, would it be worthwhile to have this

>>>>>>> consolidated list/dataframe, maybe as part of data for one of the core

>>>>>>> geospatial packages? Thanks in advance, Vijay

>>>>>>>

>>>>>>> [[alternative HTML version deleted]]

>>>>>>>

>>>>>>> _______________________________________________

>>>>>>> R-sig-Geo mailing list

>>>>>>> [hidden email]

>>>>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>>>>>>>

>>>>>>

>>>>>> --

>>>>>> Roger Bivand

>>>>>> Department of Economics, Norwegian School of Economics,

>>>>>> Helleveien 30, N-5045 Bergen, Norway.

>>>>>> voice: +47 55 95 93 55; e-mail: [hidden email]

>>>>>> http://orcid.org/0000-0003-2392-6140

>>>>>> https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

>>>>>>

>>>>>

>>>>>

>>>>>

>>>>

>>>

>>>

>>

>

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

http://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en
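[Editor's note, not part of the original thread: the kind of lookup a consolidated SRS database enables can be sketched with an in-memory SQLite table. The schema below is loosely modelled on PostGIS's spatial_ref_sys table; the inserted row is illustrative, not taken from any shipped database.]

```python
import sqlite3

# In-memory stand-in for a consolidated SRS database such as QGIS's
# srs.db or the PostGIS spatial_ref_sys table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE spatial_ref_sys ("
            "srid INTEGER PRIMARY KEY, auth_name TEXT, proj4text TEXT)")
con.execute("INSERT INTO spatial_ref_sys VALUES "
            "(4326, 'EPSG', '+proj=longlat +datum=WGS84 +no_defs')")

# Look up the PROJ string for an EPSG code, as one would against a
# shipped srs.db or proj.db.
row = con.execute("SELECT proj4text FROM spatial_ref_sys "
                  "WHERE srid = 4326").fetchone()
print(row[0])  # +proj=longlat +datum=WGS84 +no_defs
```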




https://stat.ethz.ch/mailman/listinfo/r-sig-geo

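Roger's pointer to rgdal::make_EPSG() above can be sketched as follows. This is a minimal illustration, not taken from the thread; it assumes rgdal is installed with its bundled PROJ EPSG table, and the search string is purely an example:

```r
# Read the EPSG table shipped with PROJ into the R workspace; make_EPSG()
# returns a data.frame with (at least) the columns 'code', 'note' and 'prj4'
library(rgdal)

EPSG <- make_EPSG()

# Search it like any other data.frame, e.g. for UTM zone 32N definitions
# (illustrative query; 'note' holds the human-readable CRS name)
subset(EPSG, grepl("UTM zone 32N", note))
```

The result is an ordinary data.frame, so it can also be filtered by EPSG code or joined against other metadata as needed.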

### Linear referencing in R

Hi everyone! I hope my question is not too speculative and that you can understand my problem.

Well, I have a railroad shapefile and two CSV files describing the topography on each side of the railroad. The CSV files are organized as follows:

ID_topo, from_km, to_km, height

1, 0, 1.91, 15

2, 1.91, 2.23, -3

I created a point shapefile with one point for each meter of the railroad and then performed a (non-spatial) join between the from_km of the topography data frame and the km mark (km_calc) of the railroad points.

My goal is to create a line shapefile for the topography on each side of the railroad.

The issue is that when I try to make a new line shapefile for the topography based on the points of the railroad, what I get is a line that doesn't follow the curvature of the railroad. Using the example of the CSV above, what I get are several straight segments connecting each from_km directly to its to_km. (Here is a screenshot from QGIS:

https://www.dropbox.com/s/erfsst8pasoqj64/Captura%20de%20tela%202018-09-28%2019.49.47.png?dl=0 )

It may be a little difficult for me to explain the problem exactly, but if anything is unclear, please ask. The files and the script I'm using are in the following GitHub repository: https://github.com/rdornas/raileco

Thank you very much indeed in advance!

*Rubem A. P. Dornas*

Celular: (31) 99642-5102

PPG Análise e Modelagem de Sistemas Ambientais

Instituto de Geociências - Universidade Federal de Minas Gerais

Currículo Lattes: http://lattes.cnpq.br/7197154832267712

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo
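One way to get segment lines that follow the railroad can be sketched on toy data (everything below is hypothetical, standing in for the railroad points and the CSV in the question): keep all railroad points whose km mark falls inside each topography interval, and cast them, in their running order, to a single LINESTRING, rather than connecting only the interval endpoints.

```r
library(sf)

# Hypothetical stand-in for the railroad points: a curved line densified to
# one point per "metre", carrying a running km mark (km_calc)
km_calc <- seq(0, 2.5, by = 0.001)
pts <- st_as_sf(data.frame(x = km_calc, y = sin(km_calc), km_calc = km_calc),
                coords = c("x", "y"))

# The topography table from the question
topo <- data.frame(ID_topo = 1:2,
                   from_km = c(0, 1.91), to_km = c(1.91, 2.23),
                   height  = c(15, -3))

# For each topography interval, keep ALL railroad points falling inside it
# (not just the endpoints) and cast them, in order, to a LINESTRING
seg_lines <- do.call(rbind, lapply(seq_len(nrow(topo)), function(i) {
  sel <- pts[pts$km_calc >= topo$from_km[i] & pts$km_calc <= topo$to_km[i], ]
  st_sf(topo[i, ], geometry = st_sfc(st_linestring(st_coordinates(sel))))
}))
```

Because every intermediate point is retained, each resulting line bends with the railroad instead of cutting straight from from_km to to_km.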


### spgm

Dear all,

I would like to ask whether it is possible to apply something similar to the impacts() function from the spdep package to SAR regressions estimated with the spgm() function from the splm package.

Best regards,

Martin Hulányi


Please consider the environment before printing this e-mail. Thanks

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: Question about HSAR package

Ok, it's helpful to know that I need to zoom in on those three things.

I created the Random Effects Matrix by hand in Excel, so I read that into R

and put the matrix into the format recommended here:

https://cran.r-project.org/web/packages/HSAR/vignettes/hsar.html

Below, I show how I made W, M, and Delta.mat, before I try to estimate the

model. Hopefully this helps.

constit <- readShapeSpatial("Population Weighted/Constituencies_2008/20170209_Constit")
constit.nb <- poly2nb(constit, row.names = constit$X20160526_5)
ghana.constit.weights.binary <- nb2listw(constit.nb, style = "B", zero.policy = TRUE)
W.constit <- listw2mat(ghana.constit.weights.binary)
W.constit <- W.constit / rowSums(W.constit)
W.constit <- as(W.constit, "dgCMatrix")

dist2008 <- readShapeSpatial("Population Weighted/Districts_2008/Volta Variable/20170226_Districts")
dist2008.nb <- poly2nb(dist2008, row.names = dist2008$DIST_2008)
ghana.dist2008.weights.binary <- nb2listw(dist2008.nb, style = "B", zero.policy = TRUE)
W.dist <- listw2mat(ghana.dist2008.weights.binary)
W.dist <- W.dist / rowSums(W.dist)
W.dist <- as(W.dist, "dgCMatrix")

Delta <- read.csv("Random Effects Matrix_Ghana.csv", header = TRUE, row.names = 1)
Delta.mat <- as.matrix(Delta)
Delta.mat <- as(Delta.mat, "dgCMatrix")

> HSAR.model1 <- hsar(Count_ ~ ndc_pres_3 + volatility + turnout_21
+                       + volatili_1 + X20160526_6 + DENSITY_RD + Count_3
+                       + MEAN + pov_p_2008 + gini_2008 + ferat_2008
+                       + Count_4 + literacy + grid_perCa,
+                     data = constit, W = W.constit, M = W.dist, Delta = Delta.mat,
+                     burnin = 5000, Nsim = 10000, thinning = 1,
+                     parameters.start = NULL)
Error in hsar(Count_ ~ ndc_pres_3 + volatility + turnout_21 + volatili_1 + :
  not an S4 object
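Along the lines of Roger's advice, a minimal pre-flight check (a sketch with a toy matrix, not the poster's data) would verify, before calling hsar(), that each of W, M and Delta is an S4 object of the sparse class "dgCMatrix" with the expected dimensions:

```r
library(Matrix)

# Toy binary contiguity matrix (hypothetical), row-standardised and then
# converted to sparse form; Matrix(..., sparse = TRUE) returns a
# 'dgCMatrix' for a general (non-symmetric) numeric matrix
m <- matrix(c(0, 1, 0,
              1, 0, 1,
              0, 1, 0), nrow = 3, byrow = TRUE)
W <- Matrix(m / rowSums(m), sparse = TRUE)

# The same checks should pass for each matrix passed to hsar()
stopifnot(isS4(W), is(W, "dgCMatrix"))
dim(W)
```

If any of the three objects fails isS4() or is not a "dgCMatrix", that object is a natural suspect for the "not an S4 object" error.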

On Thu, Sep 27, 2018 at 3:57 PM Roger Bivand <[hidden email]> wrote:

> This code tells us nothing; the problem is in your construction of W, M

> and/or Delta. Please show this code too, ideally as a reproducible example.

> Tip: sometimes running traceback() after an error shows where it happens.

>

> Roger Bivand

> Norwegian School of Economics

> Bergen, Norway

>

> Fra: Justin Schon

> Sendt: torsdag 27. september, 21.36

> Emne: [R-sig-Geo] Question about HSAR package

> Til: [hidden email]

>

>

> Dear all, I am receiving the error "not an S4 object" when I attempt to

> estimate the hierarchical spatial autoregressive model from the HSAR

> package. I have attempted several ways of creating the lower level matrix

> and higher level matrix. Rather than asking if members of this list can

> help with the code, I am first wondering if anyone can explain why this

> error would appear. I am including the code that estimates the model, as

> well as the error, below: > HSAR.model1

>

--

Justin Schon

Post-Doctoral Researcher on Environmental Change and Migration

MURI Migration Research Team: http://murimigration.org/

University of Florida

Fellow, Initiative for Sustainable Energy Policy (ISEP)

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: Question about HSAR package

This code tells us nothing; the problem is in your construction of W, M and/or Delta. Please show this code too, ideally as a reproducible example. Tip: sometimes running traceback() after an error shows where it happens.

Roger Bivand

Norwegian School of Economics

Bergen, Norway

Fra: Justin Schon

Sendt: torsdag 27. september, 21.36

Emne: [R-sig-Geo] Question about HSAR package

Til: [hidden email]

Dear all, I am receiving the error "not an S4 object" when I attempt to estimate the hierarchical spatial autoregressive model from the HSAR package. I have attempted several ways of creating the lower-level and higher-level matrices. Rather than asking if members of this list can help with the code, I am first wondering if anyone can explain why this error would appear. I am including the code that estimates the model, as well as the error, below: > HSAR.model1

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Question about HSAR package

Dear all,

I am receiving the error "not an S4 object" when I attempt to estimate the

hierarchical spatial autoregressive model from the HSAR package. I have

attempted several ways of creating the lower level matrix and higher level

matrix. Rather than asking if members of this list can help with the code,

I am first wondering if anyone can explain why this error would appear.

I am including the code that estimates the model, as well as the error,

below:

> HSAR.model1 <- hsar(Count_ ~ ndc_pres_3 + volatility + turnout_21
+                       + volatili_1 + X20160526_6 + DENSITY_RD + Count_3
+                       + MEAN + pov_p_2008 + gini_2008 + ferat_2008
+                       + Count_4 + literacy + grid_perCa,
+                     data = constit, W = W.constit, M = W.dist, Delta = Delta.mat,
+                     burnin = 5000, Nsim = 10000, thinning = 1,
+                     parameters.start = NULL)
Error in hsar(Count_ ~ ndc_pres_3 + volatility + turnout_21 + volatili_1 + :
  not an S4 object

Again, I am not looking for advice with the code right now. I am wondering

what kinds of problems could cause this error message.

As a note, I receive the same error message when I try to estimate a SAR model and when I simplify the model down to one independent variable.

I greatly appreciate any ideas that members of this list might have.

Thank you,

Justin

--

Justin Schon

Post-Doctoral Researcher on Environmental Change and Migration

MURI Migration Research Team: http://murimigration.org/

University of Florida

Fellow, Initiative for Sustainable Energy Policy (ISEP)

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo
