aaronmarkham commented on a change in pull request #12664: [MXNET-637] 
Multidimensional LSTM example for MXNetR
URL: https://github.com/apache/incubator-mxnet/pull/12664#discussion_r220392030
 
 

 ##########
 File path: R-package/vignettes/MultidimLstm.Rmd
 ##########
 @@ -0,0 +1,358 @@
+LSTM time series example
+=============================================
+
+This tutorial shows how to use an LSTM model with multivariate data and generate predictions from it. For demonstration purposes, we use an open-source pollution dataset. You can find the data on [GitHub](https://github.com/dmlc/web-data/tree/master/mxnet/tinyshakespeare).
+The tutorial illustrates how to use LSTM models with MXNetR. We forecast air pollution using data recorded at the US embassy in Beijing, China, over five years.
+
+Dataset Attribution:
+"PM2.5 data of US Embassy in Beijing" (https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data)
+We want to predict pollution levels (PM2.5 concentration) in the city using the above dataset.
+
+```
+Dataset description:
+No: row number
+year: year of data in this row
+month: month of data in this row
+day: day of data in this row
+hour: hour of data in this row
+pm2.5: PM2.5 concentration
+DEWP: Dew Point
+TEMP: Temperature
+PRES: Pressure
+cbwd: Combined wind direction
+Iws: Cumulated wind speed
+Is: Cumulated hours of snow
+Ir: Cumulated hours of rain
+```
+
+We use the past PM2.5 concentration, dew point, temperature, pressure, wind speed, snow, and rain to predict
+PM2.5 concentration levels.
+
+Load and pre-process the data
+---------
+Load in the data and preprocess it. It is assumed that the data has been downloaded locally as the CSV file 'data.csv'.
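+
+If you still need the file, a minimal sketch for fetching it is shown below; the UCI download URL and file name are assumptions, so adjust them to wherever you obtained the data.
+
+```r
+## Optional: download the dataset if 'data.csv' is not already present.
+## NOTE: the URL below is an assumed UCI hosting location for this dataset.
+if (!file.exists("data.csv")) {
+    download.file("https://archive.ics.uci.edu/ml/machine-learning-databases/00381/PRSA_data_2010.1.1-2014.12.31.csv",
+                  destfile = "data.csv")
+}
+```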
+
+```r
+## Loading required packages
+library("readr")
+library("dplyr")
+library("mxnet")
+library("abind")
+```
+
+
+
+```r
+mx.set.seed(1234)
+## Preprocessing steps
+Data <- read.csv(file = "data.csv", header = TRUE, sep = ",")
+
+## Extracting specific features from the dataset as variables for time series
+## We extract pollution, dew point, temperature, pressure, wind speed, snowfall, and rainfall information from the dataset
+df <- data.frame(Data$pm2.5, Data$DEWP, Data$TEMP, Data$PRES, Data$Iws, Data$Is, Data$Ir)
+df[is.na(df)] <- 0
+
+## Now we normalise each feature to the range (0, 1)
+df <- matrix(as.matrix(df), ncol = ncol(df), dimnames = NULL)
+rangenorm <- function(x) {(x - min(x)) / (max(x) - min(x))}
+df <- apply(df, 2, rangenorm)
+df <- t(df)
+```
+To use multidimensional data with MXNetR, we need to convert the training data to the form
+(n_dim x seq_len x num_samples), and the label should be of the form (seq_len x num_samples) or (1 x num_samples),
+depending on the LSTM flavour to be used (one-to-one / many-to-one). Please note that MXNetR currently supports only these two flavours of RNN.
+We have used n_dim = 7, seq_len = 100, and num_samples = 430.
+
+```r
+n_dim <- 7
+seq_len <- 100
+num_samples <- 430
+## extract only the required data from the dataset
+trX <- df[1:n_dim, 25:(24 + (seq_len * num_samples))]
+## the label data (next PM2.5 concentration) should be one time step ahead of the current PM2.5 concentration
+trY <- df[1, 26:(25 + (seq_len * num_samples))]
+## reshape the matrices into the format accepted by MXNetR RNNs
+trainX <- trX
+dim(trainX) <- c(n_dim, seq_len, num_samples)
+trainY <- trY
+dim(trainY) <- c(seq_len, num_samples)
+```
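+
+It is worth verifying that the arrays now have the shapes described above:
+
+```r
+dim(trainX)  # 7 100 430  (n_dim x seq_len x num_samples)
+dim(trainY)  # 100 430    (seq_len x num_samples)
+```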
+
+
+
+Defining and training the network
+---------
+
+```r
+batch.size <- 32
+## take the first 300 samples for training and the remaining 100 for evaluation
+train_ids <- 1:300
+eval_ids <- 301:400
+
+## create data iterators
+train.data <- mx.io.arrayiter(data = trainX[, , train_ids, drop = F],
+                              label = trainY[, train_ids],
+                              batch.size = batch.size, shuffle = TRUE)
+
+eval.data <- mx.io.arrayiter(data = trainX[, , eval_ids, drop = F],
+                             label = trainY[, eval_ids],
+                             batch.size = batch.size, shuffle = FALSE)
+
+## Create the symbol for RNN
+symbol <- rnn.graph(num_rnn_layer = 2,
+                    num_hidden = 50,
+                    input_size = NULL,
+                    num_embed = NULL,
+                    num_decode = 1,
+                    masking = F,
+                    loss_output = "linear",
+                    dropout = 0.2,
+                    ignore_label = -1,
+                    cell_type = "lstm",
+                    output_last_state = T,
+                    config = "one-to-one")
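+
+## Note on the labels: with config = "one-to-one" the network emits one
+## prediction per time step, which is why trainY has the shape
+## (seq_len x num_samples); "many-to-one" would instead expect one label
+## per sequence.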
+
+
+
+mx.metric.mse.seq <- mx.metric.custom("MSE", function(label, pred) {
+                      label = mx.nd.reshape(label, shape = -1)
+                      pred = mx.nd.reshape(pred, shape = -1)
+                      res <- mx.nd.mean(mx.nd.square(label-pred))
+                      return(as.array(res))
+                      })
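+
+## The metric flattens label and prediction into vectors so the per-time-step
+## outputs of the one-to-one network are averaged into a single MSE value.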
+
+
+
+ctx <- mx.cpu()
+
+initializer <- mx.init.Xavier(rnd_type = "gaussian",
+                              factor_type = "avg",
+                              magnitude = 3)
+
+optimizer <- mx.opt.create("adadelta", rho = 0.9, eps = 1e-5, wd = 1e-6,
+                           clip_gradient = 1, rescale.grad = 1/batch.size)
+
+logger <- mx.metric.logger()
+epoch.end.callback <- mx.callback.log.train.metric(period = 10, logger = logger)
+
+## train the network
+system.time(
+  model <- mx.model.buckets(symbol = symbol,
+                            train.data = train.data,
+                            eval.data = eval.data,
+                            num.round = 50, ctx = ctx, verbose = TRUE,
+                            metric = mx.metric.mse.seq,
+                            initializer = initializer, optimizer = optimizer,
+                            batch.end.callback = NULL,
+                            epoch.end.callback = epoch.end.callback)
+)
+
+```
+Output:
+```
+Start training with 1 devices
+[1] Train-MSE=0.0175756292417645
+[1] Validation-MSE=0.0108831799589097
+[2] Train-MSE=0.0116676720790565
+[2] Validation-MSE=0.00835292390547693
+[3] Train-MSE=0.0103536401875317
+[3] Validation-MSE=0.00770004198420793
+[4] Train-MSE=0.00992695298045874
+[4] Validation-MSE=0.00748429435770959
+[5] Train-MSE=0.00970045481808484
+[5] Validation-MSE=0.00734121853020042
+[6] Train-MSE=0.00956926480866969
+[6] Validation-MSE=0.00723317882511765
+[7] Train-MSE=0.00946674752049148
+[7] Validation-MSE=0.00715298682916909
+[8] Train-MSE=0.00936337062157691
+[8] Validation-MSE=0.00708933407440782
+[9] Train-MSE=0.00928824483416974
+[9] Validation-MSE=0.00702768098562956
+[10] Train-MSE=0.00921900537796319
+[10] Validation-MSE=0.00698263343656436
+[11] Train-MSE=0.00915476991795003
+[11] Validation-MSE=0.00694422319065779
+[12] Train-MSE=0.00911224479787052
+[12] Validation-MSE=0.00691420421935618
+[13] Train-MSE=0.0090605927631259
+[13] Validation-MSE=0.00686828832840547
+[14] Train-MSE=0.00901446407660842
+[14] Validation-MSE=0.00685080053517595
+[15] Train-MSE=0.0089907712303102
+[15] Validation-MSE=0.00681731867371127
+[16] Train-MSE=0.00894410968758166
+[16] Validation-MSE=0.00680519745219499
+[17] Train-MSE=0.00891360901296139
+[17] Validation-MSE=0.00678778381552547
+[18] Train-MSE=0.00887094167992473
+[18] Validation-MSE=0.00675358629086986
+[19] Train-MSE=0.00885531790554523
+[19] Validation-MSE=0.00676276802551001
+[20] Train-MSE=0.0088208335917443
+[20] Validation-MSE=0.00674056768184528
+[21] Train-MSE=0.00880425171926618
+[21] Validation-MSE=0.00673307734541595
+[22] Train-MSE=0.00879250690340996
+[22] Validation-MSE=0.00670740590430796
+[23] Train-MSE=0.00875497269444168
+[23] Validation-MSE=0.00668720051180571
+[24] Train-MSE=0.00873568719252944
+[24] Validation-MSE=0.00669587979791686
+[25] Train-MSE=0.00874641905538738
+[25] Validation-MSE=0.00669469079002738
+[26] Train-MSE=0.008697918523103
+[26] Validation-MSE=0.00669995549833402
+[27] Train-MSE=0.00869045881554484
+[27] Validation-MSE=0.00670569541398436
+[28] Train-MSE=0.00865633632056415
+[28] Validation-MSE=0.00670662586344406
+[29] Train-MSE=0.00868522766977549
+[29] Validation-MSE=0.00668792036594823
+[30] Train-MSE=0.0086129839066416
+[30] Validation-MSE=0.00667576276464388
+[31] Train-MSE=0.0086337742395699
+[31] Validation-MSE=0.0067121529718861
+[32] Train-MSE=0.00863495240919292
+[32] Validation-MSE=0.0067587440717034
+[33] Train-MSE=0.00863885483704507
+[33] Validation-MSE=0.00670913810608909
+[34] Train-MSE=0.00858410224318504
+[34] Validation-MSE=0.00674143311334774
+[35] Train-MSE=0.00860943677835166
+[35] Validation-MSE=0.00671671854797751
+[36] Train-MSE=0.00857279957272112
+[36] Validation-MSE=0.00672605860745534
+[37] Train-MSE=0.00857790051959455
+[37] Validation-MSE=0.00671195174800232
+[38] Train-MSE=0.00856402018107474
+[38] Validation-MSE=0.00670708599500358
+[39] Train-MSE=0.00855070641264319
+[39] Validation-MSE=0.00669713690876961
+[40] Train-MSE=0.00855873627588153
+[40] Validation-MSE=0.00669847876997665
+[41] Train-MSE=0.00854103988967836
+[41] Validation-MSE=0.00672988337464631
+[42] Train-MSE=0.00854658158496022
+[42] Validation-MSE=0.0067430961644277
+[43] Train-MSE=0.00850498480722308
+[43] Validation-MSE=0.00670209160307422
+[44] Train-MSE=0.00847653122618794
+[44] Validation-MSE=0.00672520510852337
+[45] Train-MSE=0.00853331410326064
+[45] Validation-MSE=0.0066903488477692
+[46] Train-MSE=0.0084140149410814
+[46] Validation-MSE=0.00665930815739557
+[47] Train-MSE=0.00842269244603813
+[47] Validation-MSE=0.00667664298089221
+[48] Train-MSE=0.00844420134089887
+[48] Validation-MSE=0.00665349006885663
+[49] Train-MSE=0.00839704093523324
+[49] Validation-MSE=0.00666191370692104
+[50] Train-MSE=0.00840363306924701
+[50] Validation-MSE=0.00664306507678702
+   user  system elapsed
+ 66.782   6.229  39.745
+
+```
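+
+The `logger` object accumulates the per-epoch metric values. Below is a minimal sketch for visualizing convergence with base graphics, assuming the logger exposes the recorded values as `logger$train` and `logger$eval`; the plot is our addition, not part of the original tutorial.
+
+```r
+## plot training and validation MSE per epoch
+plot(logger$train, type = "l", col = "blue", xlab = "epoch", ylab = "MSE")
+lines(logger$eval, col = "red")
+legend("topright", legend = c("Train-MSE", "Validation-MSE"),
+       col = c("blue", "red"), lty = 1)
+```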
+
+
+Inference on the network
+---------
+Now that we have trained the network, let's use it for inference.
+
+```r
+ctx <- mx.cpu()
+
+## We extract the state symbols for RNN
+
+internals <- model$symbol$get.internals()
+sym_state <- internals$get.output(which(internals$outputs %in% "RNN_state"))
+sym_state_cell <- internals$get.output(which(internals$outputs %in% "RNN_state_cell"))
+sym_output <- internals$get.output(which(internals$outputs %in% "loss_output"))
+symbol <- mx.symbol.Group(sym_output, sym_state, sym_state_cell)
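+
+## Grouping the prediction output with the RNN state outputs lets us carry the
+## hidden and cell states forward while generating predictions one step at a time.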
+
+## We will predict 100 time stamps for the 401st sample, since it was not used in training
+pred_length <- 100
+predict <- numeric()
+
+## We pass the 400th sample through the network to obtain the RNN state and use it for predicting the next 100 time stamps.
+data <- mx.nd.array(trainX[, , 400, drop = F])
 
 Review comment:
   maybe I missed this but I'm not sure what's going on with the labels...

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
