We simulate \(10\) independent realisations (surfaces) from a zero-mean GP with a Matérn \((\nu=3/2)\) covariance function. Each surface is observed at \(30 \times 30 = 900\) points on a regular grid in \([0,1]^2\).
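For reference, the Matérn covariance with \(\nu = 3/2\) has a closed form. Assuming the common anisotropic parameterisation in which each input dimension is scaled by a weight \(w_q\) (the exact internal scaling used by cov.matern may differ), it reads
\[
k(\mathbf{x}, \mathbf{x}') = v \left(1 + \sqrt{3}\, d\right) \exp\!\left(-\sqrt{3}\, d\right),
\qquad
d = \Big( \sum_{q=1}^{2} w_q \,(x_q - x'_q)^2 \Big)^{1/2},
\]
with \(v = \exp(\texttt{matern.v})\) and \(w_q = \exp(\texttt{matern.w}_q)\) corresponding to the log-scale hyperparameters set in the code below.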
# Load required packages: GPFDA provides cov.matern() and gpr(); MASS provides mvrnorm()
library(GPFDA)
library(MASS)

set.seed(123)
nrep <- 10                     # number of independent surfaces
n1 <- 30                       # grid points in each direction
n2 <- 30
n <- n1*n2                     # 900 observations per surface
input1 <- seq(0, 1, len=n1)
input2 <- seq(0, 1, len=n2)
input <- as.matrix(expand.grid(input1=input1, input2=input2))

# True hyperparameters, on the log scale
hp <- list('matern.v'=log(2), 'matern.w'=c(log(20), log(25)), 'vv'=log(0.2))
nu <- 1.5

# Covariance of the noisy observations: Matern kernel plus noise variance
Sigma <- cov.matern(hyper=hp, input=input, nu=nu) + diag(exp(hp$vv), n)

# Each column of Y holds one simulated surface (n x nrep)
Y <- t(mvrnorm(n=nrep, mu=rep(0, n), Sigma=Sigma))
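As a quick sanity check (an addition to the original example), one simulated surface can be reshaped onto the grid and plotted with base R's persp(); expand.grid varies input1 fastest, so the rows of the reshaped matrix index input1:
# Plot the first simulated surface on the n1 x n2 grid
persp(input1, input2, matrix(Y[,1], nrow=n1, ncol=n2),
      xlab="input1", ylab="input2", zlab="Y", theta=30, phi=30)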
We now split the dataset into training and test sets. We sample 80% of the grid points in each coordinate direction and take their intersection as the test set, so it contains \(24 \times 24 = 576\) of the \(900\) observations (64%), leaving 324 for training.
idx <- expand.grid(1:n1, 1:n2)        # grid indices matching the rows of 'input'
n1test <- floor(n1*0.8)               # 24 grid points per direction
n2test <- floor(n2*0.8)
idx1 <- sort(sample(1:n1, n1test))    # sampled indices in the first direction
idx2 <- sort(sample(1:n2, n2test))    # sampled indices in the second direction
whichTest <- idx[,1]%in%idx1 & idx[,2]%in%idx2
inputTest <- input[whichTest, ]       # 576 test locations
Ytest <- Y[whichTest, ]
inputTrain <- input[!whichTest, ]     # 324 training locations
Ytrain <- Y[!whichTest, ]
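A quick check of the resulting split sizes (not part of the original code):
# 324 training and 576 test locations, i.e. 64% of the grid is held out
c(train=nrow(inputTrain), test=nrow(inputTest))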
The GPR model is then estimated by:
fit <- gpr(input=inputTrain, response=Ytrain, Cov='matern', trace=4, useGradient=TRUE,
           iter.max=50, nu=nu, nInitCandidates=50)
#>
#> --------- Initialising ----------
#> iter: -loglik: matern.v matern.w1 matern.w2 vv
#> 0: 5368.8149: 4.36887 8.17782 0.587245 -0.792392
#> 4: 3899.6108: 0.403812 7.29654 3.05398 -1.81042
#> 8: 3128.8965: -0.0888745 4.21497 4.13713 -2.00594
#> 12: 3043.2620: 0.342497 3.45996 3.47264 -1.71053
#> 16: 3039.2156: 0.499630 3.22883 3.36662 -1.59966
#> 20: 3038.2633: 0.526918 3.20605 3.31268 -1.63734
#> 24: 3038.2241: 0.550658 3.17911 3.28203 -1.63260
#>
#> optimization finished.
The hyperparameter estimates are: