
Friday, April 17, 2020

CLIs for March 2020: News Release



08/04/2020 - The CLIs for March 2020 recorded the largest drop on record in most major economies in line with the considerable economic shock caused by the COVID-19 pandemic and its immediate impact on production, consumption and confidence in the wake of lockdown measures.

Over the next few months, in particular, care will be needed in interpreting the CLI.

Firstly, with considerable uncertainty around the duration of lockdown measures, the ability of leading indicators to predict future movements in the business cycle has been severely curtailed: current estimates of the CLI are able to provide meaningful signals on current movements in activity, and should therefore be viewed as coincident rather than leading.

Secondly, as always, the magnitude of the CLI decline should not be regarded as a measure of the degree of contraction in economic activity, rather it should be viewed as an indication of the strength of the signal that economies have entered a phase of contraction. For comparison, the signal is stronger now than it was at the time of the Financial Crisis.

Thirdly, the CLIs are not yet able to anticipate the end of the slowdown, especially as it is not yet clear how long, nor indeed severe, lock-down measures are likely to be. However, as the situation settles, even with a more prolonged lockdown, the CLI will begin to recover its ability to predict as firms and consumers begin to adapt to new (even if only short-term) realities, especially as governments begin to formulate and provide signals around longer term strategies, beyond the initial immediate measures they have had to impose.

Download Entire Release

Thursday, November 14, 2019

Download the full composite leading indicator (CLI) data and load it into the R environment

# Go to the OECD site and download "Full Indicator data (.csv)".
# Rename the downloaded file to CLI3.csv.
# Move to the directory that contains the downloaded CSV.
# Run the script from the git repo, for example as below.

$ ~/R/R2/index/cli_download.sed

# the following files will be created:

$ ls -l *.csv
-rw-r--r--@ 1 honomoto  staff  1283086 11 14 09:50 CLI3.csv
-rw-r--r--  1 honomoto  staff     7105 11 14 09:50 chn.csv
-rw-r--r--  1 honomoto  staff    12848 11 14 09:50 ea19.csv
-rw-r--r--  1 honomoto  staff    13923 11 14 09:50 oecd.csv
-rw-r--r--  1 honomoto  staff    15465 11 14 09:50 usa.csv

# after running cli_download, merge the files in R as below.


library(xts)  # for as.xts/merge; xts also loads zoo for read.zoo

cli_xts <- merge(as.xts(read.zoo(read.csv("~/Downloads/oecd.csv"))),
                 as.xts(read.zoo(read.csv("~/Downloads/usa.csv"))),
                 as.xts(read.zoo(read.csv("~/Downloads/chn.csv"))),
                 as.xts(read.zoo(read.csv("~/Downloads/ea19.csv"))),
                 suffixes = c("oecd","usa","china","ea19"))

# draw graphs

par(mfrow=c(4,1))
plot(diff(cli_xts$oecd)["2011::"],type='p',lwd=1,pch='+')
plot(diff(cli_xts$usa)["2011::"],type='p',lwd=1,pch='+')
plot(diff(cli_xts$ea19)["2011::"],type='p',lwd=1,pch='+')
plot(diff(cli_xts$china)["2011::"],type='p',lwd=1,pch='+')

par(mfrow=c(1,1))

Wednesday, November 13, 2019

Download Composite Leading Indicator from OECD site. Early Bird Version


# Go to the OECD site and download the CSV from the Export menu.
# Assume the downloaded file name is "MEI_CLI.csv".
# Pick up the amplitude-adjusted OECD total data from the CSV.

$ sed -n '/OECD\ -\ Total/p' MEI_CLI_13112019035737453.csv |grep LOLITOAA | grep Ampli | awk -F, 'BEGIN{ORS = ""}{print $(NF-2)","}'

The improved version below integrates all the query strings into a single pattern.

sed -n '/LOLITOAA.*Ampli.*OECD\ -\ Total/p' MEI_CLI.csv | awk -F, 'BEGIN{ORS = "";print "c("}{print $(NF-2)","}END{print $(NF-2)")\n"}'


Or do it all in one awk command, as below.



awk -F, 'BEGIN{ORS = "";print "w <- c("}/LOLITOAA.*Ampli.*OECD\ -\ Total/{print $(NF-2)","}END{print $(NF-2)")\n length(w)\n"}' MEI_CLI.csv 

A sample of the output is below. The output is now in the form of an R statement: it constructs a vector with c(), which is assigned to w.


100.7565,100.7652,100.7551,100.7274,100.6736,100.5974,100.512,100.4191,100.3163,100.2006,100.0703,99.93036,99.79047,99.6553,99.53469,99.43623,99.36298,99.30513,99.25108,99.20237,99.15969,99.12648,99.10692

w <- c(100.7565,100.7652,100.7551,100.7274,100.6736,100.5974,100.512,100.4191,100.3163,100.2006,100.0703,99.93036,99.79047,99.6553,99.53469,99.43623,99.36298,99.30513,99.25108,99.20237,99.15969,99.12648,99.10692)

length(w)
[1] 23
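The same filtering can also be done entirely inside R, skipping sed/awk. This is only a sketch: the column names used below (SUBJECT, MEASURE, Country, Value) are assumptions about the layout of the MEI_CLI export and may need adjusting to the actual file.

```r
# Sketch: filter the MEI_CLI export in R instead of sed/awk.
# SUBJECT, MEASURE, Country and Value are assumed column names.
mei <- read.csv("MEI_CLI.csv", stringsAsFactors = FALSE)
w <- mei$Value[mei$SUBJECT == "LOLITOAA" &
               grepl("Ampli", mei$MEASURE) &
               mei$Country == "OECD - Total"]
length(w)
```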

# The last month will be found in this way.

sed -n '/LOLITOAA.*Ampli.*OECD\ -\ Total/p' MEI_CLI.csv  | awk -F, 'END{print $7}'
"2019-09"

# As the last entry is "2019-09", build the monthly index ending "2019-09-01" as below.

w <- as.xts(w,last(seq(as.Date("2010-01-01"),as.Date("2019-09-01"),by='months'),length(w)))
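The pattern (a plain numeric vector paired with the tail of a monthly date sequence) can be checked with a tiny self-contained example; the numbers here are made up, and xts() is used as the explicit constructor.

```r
library(xts)  # also loads zoo

# Toy stand-in for the CLI readings extracted above.
w <- c(100.1, 100.0, 99.9)

# Take the last length(w) month-starts ending at the known last month
# and use them as the index.
idx <- last(seq(as.Date("2010-01-01"), as.Date("2019-09-01"), by = "months"),
            length(w))
w_xts <- xts(w, order.by = idx)
index(w_xts)  # "2019-07-01" "2019-08-01" "2019-09-01"
```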

Monday, October 28, 2019

CLI delta plus-period graphs.


# drawing of the wp graphs starts here.


wpx <- wp  # keep a backup copy of wp

wp <- wp[wp[,2] > 10]  # keep only periods longer than 10 months

par(mfrow=c(2,3))
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),1)[1]],main="")
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),2)[1]],main="")
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),3)[1]],main="")
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),4)[1]],main="")
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),5)[1]],main="")
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),6)[1]],main="")




plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),7)[1]],main="")
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),8)[1]],main="")
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),9)[1]],main="")
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),10)[1]],main="")
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),11)[1]],main="")
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),12)[1]],main="")




plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),13)[1]],main="")
plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),14)[1]],main="")
# plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),15)[1]],main="")
# plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),16)[1]],main="")
# plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),17)[1]],main="")
# plot(to.monthly(SP5[,4])[,4][last(paste(  substr(mondate(index(wp)) - as.vector(wp[,2]-1),1,7), substr(mondate(index(wp)),1,7),sep="::"),18)[1]],main="")


par(mfrow=c(1,1))
wp <- wpx  # restore wp from the backup
remove(wpx)


# draw wp graph ends.






Tuesday, September 10, 2019

CLI last 24 months


For the record.

> last(append(cli_xts$oecd,as.xts(99.02,as.Date("2019-07-01"))),24)
                oecd
2017-08-01 100.55800
2017-09-01 100.62020
2017-10-01 100.66770
2017-11-01 100.69360
2017-12-01 100.69740
2018-01-01 100.68160
2018-02-01 100.64710
2018-03-01 100.58640
2018-04-01 100.50300
2018-05-01 100.41050
2018-06-01 100.31140
2018-07-01 100.20330
2018-08-01 100.08190
2018-09-01  99.94489
2018-10-01  99.79828
2018-11-01  99.65163
2018-12-01  99.51014
2019-01-01  99.38431
2019-02-01  99.28202
2019-03-01  99.20721
2019-04-01  99.15151
2019-05-01  99.10212
2019-06-01  99.05597
2019-07-01  99.02000

Monday, June 17, 2019

CLI - composite leading indicator - USA, China and the Euro Area


> summary(lm(cli_xts$oecd ~ cli_xts$usa + cli_xts$ea19 + cli_xts$china))

Call:
lm(formula = cli_xts$oecd ~ cli_xts$usa + cli_xts$ea19 + cli_xts$china)

Residuals:
     Min       1Q   Median       3Q      Max
-0.68475 -0.09634  0.03744  0.11972  0.41636

Coefficients:
              Estimate Std. Error t value Pr(>|t|) 
(Intercept)   9.265072   1.036438   8.939   <2e-16 ***
cli_xts$usa   0.402771   0.013317  30.246   <2e-16 ***
cli_xts$ea19  0.438171   0.012302  35.618   <2e-16 ***
cli_xts$china 0.065993   0.006695   9.857   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1958 on 348 degrees of freedom
  (420 observations deleted due to missingness)
Multiple R-squared:  0.9577, Adjusted R-squared:  0.9573
F-statistic:  2624 on 3 and 348 DF,  p-value: < 2.2e-16

> last(cli_xts,n=12)
                oecd       usa    china      ea19
2018-05-01 100.33250 100.47600 99.29018 100.53100
2018-06-01 100.22810 100.42840 99.13142 100.40680
2018-07-01 100.11390 100.36340 98.96715 100.28170
2018-08-01  99.98618 100.27230 98.80726 100.14970
2018-09-01  99.84174 100.13410 98.67062 100.01150
2018-10-01  99.68746  99.94146 98.57309  99.87226
2018-11-01  99.53398  99.71795 98.51999  99.73539
2018-12-01  99.38741  99.48866 98.50362  99.59542
2019-01-01  99.25877  99.28568 98.52641  99.45764
2019-02-01  99.15687  99.12820 98.58793  99.32590
2019-03-01  99.08728  99.01845 98.69128  99.20160
2019-04-01  99.03429  98.93993 98.80725  99.08523

for future reference.

> last(tmp.predict,n=6)
        SP5.Open SP5.High SP5.Low SP5.Close  SP5.Volume   spline      eps
 1 2019  2476.96  2708.95 2443.96   2704.10 80391630000 2906.849 2760.208
 2 2019  2702.32  2813.49 2681.83   2784.49 70183430000 2929.656 2779.348
 3 2019  2798.22  2860.31 2722.27   2834.40 78596280000 2956.669 2804.840
 4 2019  2848.63  2949.52 2848.63   2945.83 69604840000 2983.000 2829.000
 5 2019  2952.33  2954.13 2750.52   2752.06 76860120000 3010.000 2854.000
 6 2019  2751.53  2910.61 2728.81   2886.98 33703630000 3037.000 2879.000

Monday, June 3, 2019

Histogram: performance comparison between CLI 1-month-delta positive and negative periods.







Here "func()" is the function defined in "cli_delta_vs_period_return_rate.r".

> hist(as.vector(func("minus","1970-01-01")[,1])-1,col=rgb(0.5,1,0),breaks=20,xlim=c(-0.6,0.5),ylim=c(0,7))
0111 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
[1] 11.34615
[1] 0.9858096
> par(new=T)
> hist(as.vector(func("plus","1970-01-01")[,1])-1,col=rgb(0.5,0,1,alpha=0.4),breaks=10,xlim=c(-0.6,0.5),ylim=c(0,7))

The source code is as below.

#
# 1) pick up months in which cli_xts$oecd is up from the previous month
# 2) create a stream of flags, in which up is 1 and down is 0
# 3) compare the SPX close price between the start and the end of each period in which the cli_xts delta is plus (or minus)
# 4) return an xts object which contains: start month of the period, up/down ratio, length in months, and monthly average return during the period

func <- function(pm="plus",s="1970-01-01",l=1){
  w <- c()
  cat("0")
  cat(length(s))
  if(nchar(s) == 10){ # use nchar() to measure string length, not length()
    cat("1")
    last_date <- last(index(cli_xts$oecd))
    start_date <- s
    period <- paste(start_date,last_date,sep='::')
  }else{
     # last_date <- l
     period <- s
  }
  # last_date <- last(index(cli_xts$oecd))
  # start_date <- s
  # period <- paste(start_date,last_date,sep='::')
  start_index <- 1
  iteration <- 0
  performance_val <- c()
  period_length <- c()
  lag_month <- l
  result <- c()
  rate <- c()
  plus_or_minus <- pm
  open_p <- c()
  close_p <- c()

# Flag the months according to the parameter: for "minus" the CLI delta is less than zero, for "plus" the opposite.
  for(i in seq(1,length(diff(cli_xts$oecd,lag=lag_month)[period]),1)){
    if(plus_or_minus == "minus"){
      if(as.vector(diff(cli_xts$oecd,lag=lag_month)[period])[i] < 0){  # up is "> 0"
        w <- append(w,1)
      }else{
        w <- append(w,0)
      }
    }else if(plus_or_minus == "plus"){
      if(as.vector(diff(cli_xts$oecd,lag=lag_month)[period])[i] > 0){  # up is "> 0"
        w <- append(w,1)
      }else{
        w <- append(w,0)
      }
    }else{
      stop("please use plus or minus as 1st parameter")
    }
  }
  month_flag <- 0 # status flag
# Scan the stream: when the flag changes from 0 to 1, it is the start of a period.
  for(i in seq(1,length(diff(cli_xts$oecd,lag=lag_month)[period]),1)){
    if(w[i] == 1){
      if(month_flag == 0){ # when w is 1 and month_flag is 0, the period starts
        month_flag <- 1
        start_price <- as.vector(to.monthly(SP5[period])[,1][i]) #dc 0531
        # print(index(to.monthly(SP5[period])[,4][i]))
        # cat("from ")
        # cat(as.character(as.Date(index(to.monthly(SP5[period])[,4][i]))))
        start_index <- i
      }
      # dc 0602 add output at the end of loop
      if(i == length(diff(cli_xts$oecd,lag=lag_month)[period])){
        # print("end of the loop")
        result <- append(result,as.xts(as.vector(to.monthly(SP5[period])[,4][i]) / start_price,index(to.monthly(SP5[period])[,4][i])))
                period_length <- append(period_length,i-start_index)
        performance_val <- append(performance_val,as.vector(to.monthly(SP5[period])[,4][i-1]) / start_price)
        open_p <- append(open_p,start_price)
        close_p <- append(close_p,as.vector(to.monthly(SP5[period])[,4][i-1]))
        rate <- append(rate,last(performance_val)**(1/last(period_length))-1)
      }
# When the flag changes from 1 to 0, it is the end of a period.
    }else if(w[i] ==0){
      if(month_flag == 1){ # when w is 0 and month_flag is 1, the period ends
        # cat(" ")
        # cat(i - start_index)
        # cat(" month(s)")
        # cat("\n")
        iteration <- iteration +1
        cat(iteration)
        cat(" ")
        result <- append(result,as.xts(as.vector(to.monthly(SP5[period])[,4][i-1]) / start_price,index(to.monthly(SP5[period])[,4][i-1])))
        # print(as.xts(as.vector(to.monthly(SP5[period])[,4][i]) / start_price,index(to.monthly(SP5[period])[,4][i])))
        # print(i - start_index)
        month_flag <- 0 # when the period ends, initialize the flag.
        period_length <- append(period_length,i-start_index)
        performance_val <- append(performance_val,as.vector(to.monthly(SP5[period])[,4][i-1]) / start_price)
        open_p <- append(open_p,start_price)
        close_p <- append(close_p,as.vector(to.monthly(SP5[period])[,4][i-1]))
        rate <- append(rate,last(performance_val)**(1/last(period_length))-1)
      }
    }
  }
  cat("\n")
  print(mean(period_length))
  print(mean(performance_val))
  return(merge(result,period_length,rate,open_p,close_p))

}
# t_minus <- performance_val
# t_plus <- performance_val
func("minus","1970-01-01")
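The flag-and-transition bookkeeping in func() can also be expressed with run-length encoding. This is a hedged sketch of the idea on made-up delta values, not a drop-in replacement for func():

```r
# Sketch: find runs of negative 1-month deltas with rle() instead of a
# hand-rolled flag loop. 'delta' is toy data, not real CLI values.
delta <- c(0.1, -0.2, -0.1, -0.3, 0.2, 0.1, -0.4)

runs <- rle(delta < 0)           # runs of TRUE (negative) / FALSE (positive)
neg_lengths <- runs$lengths[runs$values]
neg_lengths                      # 3 1 : one 3-month and one 1-month negative period
# run start positions can be recovered with cumsum(c(1, head(runs$lengths, -1)))
```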




Sunday, June 2, 2019

S&P 500 performance comparison between periods when the CLI 1-month delta is positive and negative.




Periods in which the OECD Composite Leading Indicator 1-month delta is positive and periods in which it is negative alternate. Here the S&P 500 return is calculated for each such period.

The parameter "minus" selects the periods in which the CLI delta is negative. They have occurred 25 times since 1970-01-01; the average length is approx. 11.35 months and the average return is -1.41%.

On the other hand, positive periods have occurred 25 times. The average length is 11.8 months and the average return is 19.85%.

> func("minus","1970-01-01")
0111 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
[1] 11.34615
[1] 0.9858096
            result period_length          rate  open_p close_p
Sep 1970 0.9157072             9 -0.0097365802   92.06   84.30
Jan 1975 0.6634491            24 -0.0169506551  116.03   76.98
Jul 1977 0.9867239            14 -0.0009541901  100.18   98.85
Jun 1980 1.2136407            20  0.0097282501   94.13  114.24
Aug 1982 0.8803035            20 -0.0063541523  135.76  119.51
Nov 1984 1.0010403            10  0.0001039829  163.41  163.58
Jul 1986 1.3032344            17  0.0157013530  181.18  236.12
Mar 1988 0.7849672             7 -0.0339963144  329.81  258.89
Oct 1989 1.2255509            10  0.0205472899  277.72  340.36
Jan 1991 1.0362770            11  0.0032447523  331.89  343.93
Dec 1991 1.0547758             4  0.0134213316  395.43  417.09
Nov 1992 1.0395228             7  0.0055527568  414.95  431.35
Jun 1995 1.1773542             9  0.0183066251  462.69  544.75
Jan 1996 1.0937576             3  0.0303236982  581.50  636.02
Oct 1998 1.1598155            13  0.0114699680  947.28 1098.67
Sep 2001 0.6946176            18 -0.0200405678 1498.58 1040.94
Mar 2003 0.7875979            11 -0.0214722593 1076.92  848.18
May 2005 1.0579732            14  0.0040334726 1126.21 1191.50
Jun 2006 1.0001180             1  0.0001180284 1270.05 1270.20
Feb 2009 0.4885423            20 -0.0351826442 1504.66  735.09
Jun 2010 0.9479536             1 -0.0520464319 1087.30 1030.71
Nov 2011 0.9385236             9 -0.0070249109 1328.64 1246.96
Sep 2012 1.0547405             7  0.0076425915 1365.90 1440.67
Aug 2014 1.0690570             5  0.0134449700 1873.96 2003.37
Apr 2016 1.0031085            16  0.0001940004 2058.90 2065.30
Mar 2019 1.0599561            15  0.0034295890 2645.10 2784.49



> func("plus","1970-01-01")
0111 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
[1] 11.8
[1] 1.198459
            result period_length         rate  open_p close_p
Jan 1973 1.3763938            28  0.011474868   84.30  116.03
May 1976 1.3013769            16  0.016600207   76.98  100.18
Oct 1978 0.9423369            15 -0.003951666   98.85   93.15
Dec 1980 1.1883753             6  0.029182211  114.24  135.76
Jan 1984 1.3672189            17  0.018569047  119.52  163.41
Feb 1985 1.1075926             3  0.034649717  163.58  181.18
Aug 1987 1.3967474            13  0.026036742  236.12  329.80
Dec 1988 1.0727335             9  0.007831630  258.89  277.72
Feb 1990 0.9751147             4 -0.006280247  340.36  331.89
Aug 1991 1.1498066             7  0.020142134  343.91  395.43
Apr 1992 0.9950124             4 -0.001249244  417.03  414.95
Sep 1994 1.0727019            22  0.003195123  431.35  462.71
Oct 1995 1.0674621             4  0.016454915  544.75  581.50
Sep 1997 1.4893871            20  0.020117927  636.02  947.28
Mar 2000 1.3639946            17  0.018427587 1098.67 1498.58
Apr 2002 1.0345650             7  0.004866239 1040.94 1076.92
Mar 2004 1.3277960            12  0.023908021  848.18 1126.21
May 2006 1.0659588            12  0.005337085 1191.50 1270.09
Jun 2007 1.1836842            12  0.014151848 1270.06 1503.35
May 2010 1.4932221            15  0.027089509  729.57 1089.41
Feb 2011 1.2871884             8  0.032060761 1031.10 1327.22
Feb 2012 1.0952515             3  0.030792577 1246.91 1365.68
Mar 2014 1.2994239            18  0.014657552 1440.90 1872.34
Dec 2014 1.0273593             4  0.006770750 2004.07 2058.90
Nov 2017 1.2807753            19  0.013109691 2067.17 2647.58

The results are compared with t.test(). The p-value is 7.391e-05, so the two groups are distinguishable from each other at well beyond the 99.99% confidence level.

> t.test(func("plus","1970-01-01")[,1],func("minus","1970-01-01")[,1])
<skip>
Welch Two Sample t-test

data:  func("plus", "1970-01-01")[, 1] and func("minus", "1970-01-01")[, 1]
t = 4.3306, df = 48.714, p-value = 7.391e-05
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 0.1138069 0.3109339
sample estimates:
mean of x mean of y
1.1984592 0.9860888

There is a histogram in the next entry for a better comparison.

See the source code on GitHub; the commit id is d20207a.

Wednesday, May 8, 2019

cli_5mon.r : draw spiral graph of cli 5 months delta vs. its reading.



Run "> func("2011-01-01::",9)" after executing all the code below.

#
# please refer to the latter half of
# https://00819.blogspot.com/2019/03/vix-cli-6-month-delta-and-s.html
# lag is set to 5 months.
#
#   s = start date of the spiral, like "1992-01-01::" DON'T FORGET THE DOUBLE COLON!!
#       Don't set it before "1964-01-01"
#   l = length in years, like 9. Recommended to be 9 or less. Don't exceed the current end of the data.
#
#   use like  > func("2001-01-01::",5)
#
func <- function(s="2011-01-01::",l=9){

  head_of_record <- "1964-01-01::"
  print(head_of_record)
  offset <- length(seq(as.Date(head_of_record),as.Date(s),by='months'))
  max_length <- length(seq(as.Date(head_of_record),as.Date(index(last(cli_xts))),by='months'))
  len_mon <- l*12-1
  lag_month <- 5
  # when the end period exceeds the current end, adjust the number of months and years to keep the counters within the limit.
  if(offset + len_mon > max_length){
    len_mon <- max_length - offset
    l <- ceiling(len_mon/12) # ceiling is to round up
  }
  # len_mon <- l
  # print(offset)
  # print(len_mon)
  # print(seq(as.Date(head_of_record),as.Date(s),by='months')[offset])
  # print(seq(as.Date(head_of_record),as.Date("2100-01-01"),by='months')[offset+len_mon])

  # print(offset)
  plot.default(na.trim(diff(cli_xts$oecd,lag=lag_month))[head_of_record][offset:(offset+len_mon)],cli_xts$oecd[head_of_record][offset:(offset+len_mon)],type='b')
  # print(offset)
  tmp <- par('usr')
  # par(new=T)
  plot.default(na.trim(diff(cli_xts$oecd,lag=lag_month))[head_of_record][offset:(offset+len_mon)],cli_xts$oecd[head_of_record][offset:(offset+len_mon)],type='b',xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),lwd=1,main=paste("from",substr(s,1,10),"for",len_mon+1,"months",sep=" "),ylab="",xlab="")
  par(new=T)
  for(i in seq(0,l-1,1)){
    print(i)
    # print(offset)
    # print(offset+i*12)
    # print(offset+i*12+11)
    par(new=T)
      # when the end period exceeds the current end, adjust # of months to avoid OOB
      # otherwise months to go in each iteration is always 11.
    if(offset+i*12+11 < max_length){
      m <- 11
    }else{
      # adjust the remaining months so as to stay within max_length.
      m <- max_length - (offset+i*12)
    }
    # print(m)
    # print(max_length)
    # print(len_mon)
    # print(offset)
    plot.default(na.trim(diff(cli_xts$oecd,lag=lag_month))[head_of_record][(offset+i*12):(offset+i*12+m)],cli_xts$oecd[head_of_record][(offset+i*12):(offset+i*12+m)],type='b',xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=i+1,lwd=2,ylab="",xlab="",axes = F)
    if(i == 6){
      # when i == 6, yellow is used for the dots but offers poor visibility; plot 'x' on top of them to improve it.
      par(new=T)
      plot.default(na.trim(diff(cli_xts$oecd,lag=lag_month))[head_of_record][(offset+i*12):(offset+i*12+m)],cli_xts$oecd[head_of_record][(offset+i*12):(offset+i*12+m)],type='p',xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),pch='x',ylab="",xlab="",axes = F)
    }
    par(new=T)
  }
  abline(v=0)
  abline(h=100)
  # abline(v=seq(0.5,-1,-0.1),col=6,lty=3)
  # automatically calculate the upper and lower limit of vline.
  # might be able to use 'floor(tmp[2]*10)/10' instead of round(x,digit=1)
  abline(v=seq(ceiling((tmp[1]*10))/10,floor(tmp[2]*10)/10,0.1),col=6,lty=3)

}
func("2001-01-01::",9)


Monday, May 6, 2019

CLI 5 month delta and 1 month delta vs. SPX price move.


Months are categorized into 4 groups: 1) both the 5-month and the 1-month delta are positive -> bp, 2) both negative -> bm, 3) 5-month negative and 1-month positive -> mp, and 4) 5-month positive and 1-month negative -> pm.
Then count the number of months in which the index moves more than the given parameter: when the parameter is (0.9,"c"), the close is more than 10% lower than the open; with (0.85,"h"), the low is more than 15% lower than the high.

The sample below counts the months in which the price moved down more than 10% on a close basis. The index moved down more than 10% eight times when both deltas were negative, whereas it never did when both were positive.

> func(0.9,"c")
bp [1] 289
bm [1] 287
mp [1] 55
pm [1] 55
pm+bm+mp [1] 397
correction sp5 vs. bm[1] "1973-11-01" "1974-09-01" "1980-03-01" "1998-08-01" "2002-09-01" "2008-10-01" "2009-02-01" "2018-12-01"
correction sp5 vs. bpDate of length 0
correction sp5 vs. pm[1] "1987-10-01"
correction sp5 vs. mpDate of length 0
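The four-way split described above can be sketched compactly; d5 and d1 below are made-up stand-ins for diff(cli_xts$oecd, lag=5) and diff(cli_xts$oecd, lag=1):

```r
# Sketch: classify months by the signs of the 5-month and 1-month deltas.
d5 <- c( 0.3,  0.2, -0.1, -0.4,  0.1)
d1 <- c( 0.1, -0.1,  0.2, -0.2,  0.3)

group <- ifelse(d5 > 0 & d1 > 0, "bp",
         ifelse(d5 < 0 & d1 < 0, "bm",
         ifelse(d5 < 0 & d1 > 0, "mp", "pm")))
table(group)  # bm 1, bp 2, mp 1, pm 1
```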




# 'delta' is the threshold on the monthly close vs. open ratio: when delta is 0.9,
# the close price is 10% down from the open.
# When m is "c", open vs. close is compared; in the case of "h", it is high vs. low.
#
func <- function(delta=0.9,m="c"){

  ind_bp <- index(na.omit(diff(cli_xts$oecd,lag=5))["1962::"])[na.omit(diff(cli_xts$oecd,lag=5))["1962::"] > 0 & na.omit(diff(cli_xts$oecd,lag=1))["1962::"] > 0]

  ind_bm <- index(na.omit(diff(cli_xts$oecd,lag=5))["1962::"])[na.omit(diff(cli_xts$oecd,lag=5))["1962::"] < 0 & na.omit(diff(cli_xts$oecd,lag=1))["1962::"] < 0]

  ind_mp <- index(na.omit(diff(cli_xts$oecd,lag=5))["1962::"])[na.omit(diff(cli_xts$oecd,lag=5))["1962::"] < 0 & na.omit(diff(cli_xts$oecd,lag=1))["1962::"] > 0]

  ind_pm <- index(na.omit(diff(cli_xts$oecd,lag=5))["1962::"])[na.omit(diff(cli_xts$oecd,lag=5))["1962::"] > 0 & na.omit(diff(cli_xts$oecd,lag=1))["1962::"] < 0]

# sp_correction_ind <- index(SP5["1962::"][SP5["1962::"][,4] / SP5["1962::"][,1] < delta])
# sp_correction_ind <- index(SP5["1962::"][SP5["1962::"][,3] / SP5["1962::"][,2] < delta])

  # if(c == "h"){ print("T")}
  # if(c == "o"){ print("S")}else{print("F")}
  sp_correction_ind <- c()
  a <- m
  # switch(a,               # switch on the string
  #   "h" = append(sp_correction_ind,index(SP5["1962::"][SP5["1962::"][,3] / SP5["1962::"][,2] < delta])),
  #   "c" = append(sp_correction_ind,index(SP5["1962::"][SP5["1962::"][,4] / SP5["1962::"][,1] < delta])),
  #   print("?")            # when nothing matches
  # )
  if(a == "h"){
    sp_correction_ind <- index(SP5["1962::"][SP5["1962::"][,3] / SP5["1962::"][,2] < delta])
  }
  else if(a == "c"){
    sp_correction_ind <- index(SP5["1962::"][SP5["1962::"][,4] / SP5["1962::"][,1] < delta])
  }
  else{
    print("?")
  }

# cat("sp_corr ");print(sp_correction_ind)
  cat("bp ");print(length(ind_bp))
  cat("bm ");print(length(ind_bm))
  cat("mp ");print(length(ind_mp))
  cat("pm ");print(length(ind_pm))
  # cat("pm+bm+mp ");print(length(append(append(ind_pm,ind_bm),ind_mp)))
  cat("pm+bm+mp ");print(length(c(ind_pm,ind_bm,ind_mp)))


  cat("correction sp5 vs. bm");print(sp_correction_ind[is.element(sp_correction_ind,ind_bm)])

  cat("correction sp5 vs. bp");print(sp_correction_ind[is.element(sp_correction_ind,ind_bp)])
  cat("correction sp5 vs. pm");print(sp_correction_ind[is.element(sp_correction_ind,ind_pm)])
  cat("correction sp5 vs. mp");print(sp_correction_ind[is.element(sp_correction_ind,ind_mp)])


# t.test(as.vector(VIX[,4][ind_bp]),as.vector(VIX[,4][ind_bm]))
# print("######################################")
# t.test(as.vector(VIX[,4][ind_bp]),as.vector(VIX[,4][append(append(ind_pm,ind_bm),ind_mp)]))
# # t.test(as.vector(VIX[,4][ind_bp]),as.vector(VIX[,4][ind_bm]))

}
func(0.9,"c")


CLI 5 month delta vs. SPX decline.






#
# pick up months in which the monthly decline exceeds 5% (close/open < 0.95) and put them into 'events'
#
# argument 's' could be either "2000-01-01::" or "2000-01-01::2018-12-31"
#
func <- function(s="2000-01-01::",c=0.95){
  start_date <- s
  change_rate <- c
  # events <- xts(round(monthlyReturn(GSPC[start_date])[monthlyReturn(GSPC[start_date]) < -0.05],digits = 3),as.Date(mondate(index(monthlyReturn(GSPC[start_date])[monthlyReturn(GSPC[start_date]) < -0.05]))))

  events <- xts(round(SP5[start_date][,4]/SP5[start_date][,1][SP5[start_date][,4]/SP5[start_date][,1] < change_rate],digits =4),as.Date(mondate(index(SP5[start_date][,4]/SP5[start_date][,1][SP5[start_date][,4]/SP5[start_date][,1] < change_rate]))))

  #
  # draw graph of cli 5 months delta of oecd all, usa and china.
  #
  plot(diff(cli_xts$oecd,lag=5)[start_date],type='h')
  # addEventLines(events, srt=90, pos=2,col=10)  # this causes misalignment
  # addEventLines(events[c(6,7)], srt=90, pos=2,col=10)
  #
  # somehow "addEventLines(events, srt=90, pos=2,col=10)" does not work well:
  # vertical lines are put at the wrong places. Use a loop instead.
  #
  for(i in seq(1,length(events),1)){
    addEventLines(events[i], srt=90, pos=2,col=4)
  }
  events
  #
  # and place only the first entry again.
  #
  addEventLines(events[1], srt=90, pos=2,col=4)
}
func("2000-01-01::",0.95)

Tuesday, March 26, 2019

New model CLI 6 month delta, EPS, PA, UC and CS


# New model: CLI 6 month delta, EPS, PA, UC and CS
# where k2k is, for example:
k2k
# [1] "2000-01-01::2018-12-31"
# calculate the CLI 6-month delta
diff(cli_xts,lag=6)[k2k]
summary(lm(apply.quarterly(SP5[,4][k2k],mean) ~ eps_year_xts[k2k]+apply.quarterly(PA[k2k],mean)+apply.quarterly(CS[k2k],mean)+apply.quarterly(UC[k2k],mean)+apply.quarterly(diff(cli_xts$oecd,lag=6)[k2k],mean)))

# Call:
# lm(formula = apply.quarterly(SP5[, 4][k2k], mean) ~ eps_year_xts[k2k] +
#     apply.quarterly(PA[k2k], mean) + apply.quarterly(CS[k2k],
#     mean) + apply.quarterly(UC[k2k], mean) + apply.quarterly(diff(cli_xts$oecd,
#     lag = 6)[k2k], mean))
#
# Residuals:
#      Min       1Q   Median       3Q      Max
# -154.102  -50.869   -2.623   56.146  165.534
#
# Coefficients:
#                                                           Estimate Std. Error t value Pr(>|t|)
# (Intercept)                                             -9.881e+03  3.509e+02 -28.158  < 2e-16 ***
# eps_year_xts[k2k]                                        5.913e+00  5.475e-01  10.800  < 2e-16 ***
# apply.quarterly(PA[k2k], mean)                           8.689e-02  2.975e-03  29.204  < 2e-16 ***
# apply.quarterly(CS[k2k], mean)                          -5.506e+00  4.333e-01 -12.708  < 2e-16 ***
# apply.quarterly(UC[k2k], mean)                           1.126e-01  3.999e-02   2.816  0.00632 **
# apply.quarterly(diff(cli_xts$oecd, lag = 6)[k2k], mean)  7.684e+01  9.961e+00   7.715 6.12e-11 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 74.13 on 70 degrees of freedom
# Multiple R-squared:  0.9806, Adjusted R-squared:  0.9792
# F-statistic: 706.6 on 5 and 70 DF,  p-value: < 2.2e-16

result.eps <- lm(apply.quarterly(SP5[,4][k2k],mean) ~ eps_year_xts[k2k]+apply.quarterly(PA[k2k],mean)+apply.quarterly(CS[k2k],mean)+apply.quarterly(UC[k2k],mean)+apply.quarterly(diff(cli_xts$oecd,lag=6)[k2k],mean))

result.gpuc <- lm(apply.quarterly(SP5[k2k],mean)[,1] ~ PAq[k2k] * UCq[k2k] * G[k2k]*CSq[k2k] - UCq[k2k] -G[k2k] - PAq[k2k]*G[k2k] - UCq[k2k]*G[k2k]*CSq[k2k])

SP5.result <- merge(residuals(result.gpuc),predict(result.gpuc),residuals(result.eps),predict(result.eps))
GSPC.predict <- merge(to.monthly(GSPC)[substr(k2k,11,23)],last(spline(seq(1,length(SP5.result[,1]),1),as.vector(SP5.result[,2]),n=length(SP5.result[,1])*3+1)$y,n=length(to.monthly(GSPC)[,1][substr(k2k,11,23)])),last(spline(seq(1,length(SP5.result[,1]),1),as.vector(SP5.result[,4]),n=length(SP5.result[,1])*3+1)$y,n=length(to.monthly(GSPC)[,1][substr(k2k,11,23)])),suffixes=c('','spline','eps'))


plot(merge(GSPC.predict[,4],GSPC.predict[,7],GSPC.predict[,8],GSPC.predict[,4]-GSPC.predict[,7],GSPC.predict[,4]-GSPC.predict[,8]),main="GSPC.predict[,4] vs. GSPC.predict[,7]",grid.ticks.on='months')
tmp.legend <- "Black: actual \nRed: spline\nGreen: eps"
addLegend(legend.loc = "topleft", legend.names = tmp.legend,col=3)
tmp.addTA <- as.xts(rep(2800,length(index(GSPC.predict))),index(GSPC.predict))
addSeries(tmp.addTA,on=1,col=6,lwd=1)





result.eps$coefficients[1]+result.eps$coefficients[2]*eps_year_xts["2019-01"]+result.eps$coefficients[3]*as.vector((last(PA)))+result.eps$coefficients[4]*as.vector((last(CS)))+result.eps$coefficients[5]*as.vector((last(UC)))+result.eps$coefficients[6]*as.vector(last(diff(cli_xts$oecd, lag = 6)))

Sunday, March 17, 2019

VIX, CLI 6 month delta and S&P500


> last(cli_xts)
               oecd      usa
2019-01-01 99.12973 99.04795

The data is then synchronized as SP5[,4]["2000::2019-01"].

#
# VIX histogram with CLI 6-month delta positive months in red and negative months in blue, overlaid with the S&P 500.
#
mnt <- index(cli_xts$oecd["2000::2019"][cli_xts$oecd["2000::2019"]/as.vector(cli_xts$oecd["1999-07-01::2018-07-01"]) < 1])
plot.zoo(merge(VIX["2000::2019-01"][,4],VIX[mnt][,4]),type='h',col = c("red", "blue"), plot.type = "single")
abline(v=seq(as.Date("2001-01-01"),as.Date("2019-01-01"),by='years'), col=rgb(0,1,0,alpha=0.9),lty=2)
par(new=T)
plot.default(SP5[,4]["2000::2019-01"],axes=F,type='l')
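The `mnt` selection above works by dividing the CLI by its own values six months earlier and keeping the months where the ratio falls below 1. A self-contained sketch of the same idea on a plain vector (synthetic numbers, no xts required):

```r
# Flag months whose CLI level is below its value six months earlier.
dates <- seq(as.Date("2000-01-01"), by = "months", length.out = 12)
cli   <- c(100, 101, 102, 101, 100, 99, 98, 97, 98, 99, 100, 101)
lag6  <- c(rep(NA, 6), head(cli, -6))   # the series shifted forward 6 months
mnt   <- dates[which(cli / lag6 < 1)]   # NA ratios are dropped by which()
mnt   # 2000-07-01 through 2000-10-01: the deteriorating stretch
```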




#
# draw the CLI 6-month delta against the CLI level.
#
plot.default(na.trim(diff(cli_xts$oecd["2010-07-01::"],lag=6)),cli_xts$oecd["2011::"],type='b')
tmp <- par('usr')
plot.default(na.trim(diff(cli_xts$oecd["2010-07-01::"],lag=6)),cli_xts$oecd["2011::"],type='b',xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]))
par(new=T)
plot.default(na.trim(diff(cli_xts$oecd["2018-07-01::2019"],lag=6)),cli_xts$oecd["2019"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=9,lwd=2)
par(new=T)
plot.default(na.trim(diff(cli_xts$oecd["2017-07-01::2018"],lag=6)),cli_xts$oecd["2018"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=2,lwd=2)
par(new=T)
plot.default(na.trim(diff(cli_xts$oecd["2016-07-01::2017"],lag=6)),cli_xts$oecd["2017"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=3,lwd=2)
par(new=T)
plot.default(na.trim(diff(cli_xts$oecd["2015-07-01::2016"],lag=6)),cli_xts$oecd["2016"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=4,lwd=2)
par(new=T)
plot.default(na.trim(diff(cli_xts$oecd["2014-07-01::2015"],lag=6)),cli_xts$oecd["2015"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=5,lwd=2)
par(new=T)
plot.default(na.trim(diff(cli_xts$oecd["2013-07-01::2014"],lag=6)),cli_xts$oecd["2014"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=6,lwd=2)
par(new=T)
plot.default(na.trim(diff(cli_xts$oecd["2012-07-01::2013"],lag=6)),cli_xts$oecd["2013"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=7,lwd=2)
par(new=T)
plot.default(na.trim(diff(cli_xts$oecd["2011-07-01::2012"],lag=6)),cli_xts$oecd["2012"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=8,lwd=2)
par(new=T)
plot.default(na.trim(diff(cli_xts$oecd["2010-07-01::2011"],lag=6)),cli_xts$oecd["2011"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=9,lwd=2,bg='grey')
abline(v=0)
abline(h=100)
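One thing to note in the block above: `diff()` on an xts series keeps the full length and pads the first `lag` entries with NA, which is why every call is wrapped in `na.trim()`. Base R's `diff()` on a plain vector shortens the result instead; a quick illustration of the 6-month difference:

```r
# Base-R 6-month difference: the result is 6 elements shorter.
x  <- c(100, 101, 102, 101, 100, 99, 98, 97, 98, 99, 100, 101)
d6 <- diff(x, lag = 6)   # x[7]-x[1], x[8]-x[2], ...
length(d6)               # 6
d6                       # -2 -4 -4 -2  0  2
# diff() on an xts object would instead return 12 values with 6
# leading NAs, which na.trim() strips before plotting.
```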



Thursday, March 14, 2019

CLI - composite leading indicator - OECD 2019 version.


THIS ENTRY SUPERSEDES THIS PAGE

1. preparation

# download the csv from oecd (https://data.oecd.org/leadind/composite-leading-indicator-cli.htm) to "~/Downloads"
# use the file name "CLI3.csv".
# this file contains data for multiple regions, so you have to filter by region name.
# extract USA-only entries.
# execute the commands below in "~/Downloads".
#
sed -n '/USA/p' CLI3.csv |awk -F, '{print $6"-01,"$7}'  |sed 's/\"//g' |awk 'BEGIN{print "DATE,DATA"}{print $0}' > usa.csv
# extract OECD entries and exclude OECDE
sed -n '/OECD[^E]/p' CLI3.csv |awk -F, '{print $6"-01,"$7}'  |sed 's/\"//g' |awk 'BEGIN{print "DATE,DATA"}{print $0}' > oecd.csv
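If you prefer to stay inside R, the same filtering can be done with `read.csv` and subsetting. A sketch on an in-memory sample that mimics the column positions the awk commands assume (LOCATION first, TIME sixth, Value seventh; the real file's header names may differ):

```r
# In-R equivalent of the sed/awk pipeline above, on a tiny sample.
raw <- read.csv(textConnection(
'LOCATION,INDICATOR,SUBJECT,MEASURE,FREQUENCY,TIME,Value,Flags
USA,CLI,AMPLITUD,LTRENDIDX,M,2019-01,99.05,
OECD,CLI,AMPLITUD,LTRENDIDX,M,2019-01,99.13,
OECDE,CLI,AMPLITUD,LTRENDIDX,M,2019-01,99.20,'),
stringsAsFactors = FALSE)

# exact match on LOCATION avoids the OECD/OECDE collision the regex handles
usa  <- data.frame(DATE = paste0(raw$TIME[raw$LOCATION == "USA"],  "-01"),
                   DATA = raw$Value[raw$LOCATION == "USA"])
oecd <- data.frame(DATE = paste0(raw$TIME[raw$LOCATION == "OECD"], "-01"),
                   DATA = raw$Value[raw$LOCATION == "OECD"])
```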

2. drawing the graph

# read data from csv.
#
cli_xts <- merge(as.xts(read.zoo(read.csv("~/Downloads/oecd.csv"))),as.xts(read.zoo(read.csv("~/Downloads/usa.csv"))),suffixes = c("oecd","usa"))
#
#  set start date and end date
#
start_date <- as.Date("2014-07-01")
end_date <- as.Date("2019-01-01")
#
#
cli_xts$oecd[paste(start_date,end_date,sep="::")]
period_base <- paste(start_date,end_date,sep="::")
diff_mon <- 6
period_compare <- paste(as.Date(as.yearmon(mondate(as.Date(start_date))-diff_mon )),as.Date(as.yearmon(mondate(as.Date(end_date))-diff_mon )),sep="::")
paste("2018-01",end_date,sep="::")
paste("2017-07",as.Date(as.yearmon(mondate(as.Date(end_date))-diff_mon )),sep="::")

plot.default((cli_xts$oecd[period_base]   / as.vector(cli_xts$oecd[period_compare])-1)*100,cli_xts$oecd[period_base])
tmp <- par('usr')
plot.default((cli_xts$oecd[period_base] / as.vector(cli_xts$oecd[period_compare])-1)*100,cli_xts$oecd[period_base] ,xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),type='b')
par(new=T)
#
#  code for CY2019 and after.
#
#
if(as.Date("2018-12-31") < end_date){
  # add a line for data beyond "2019-01-01" once it has been released.
  plot.default((cli_xts$oecd[paste("2019-01",end_date,sep="::")] / as.vector(cli_xts$oecd[paste("2018-07",as.Date(as.yearmon(mondate(as.Date(end_date))-diff_mon )),sep="::")])-1)*100,cli_xts$oecd[paste("2019-01",end_date,sep="::")] ,xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=2,lwd=2)

  par(new=T)
  plot.default((cli_xts$oecd["2018-01::2018-12"] / as.vector(cli_xts$oecd["2017-07::2018-06"])-1)*100,cli_xts$oecd["2018-01::2018-12"], xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=6)

} else{
  plot.default((cli_xts$oecd[paste("2018-01",end_date,sep="::")] / as.vector(cli_xts$oecd[paste("2017-07",as.Date(as.yearmon(mondate(as.Date(end_date))-diff_mon )),sep="::")])-1)*100,cli_xts$oecd[paste("2018-01",end_date,sep="::")] ,xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=2,lwd=2)
}

par(new=T)
plot.default((cli_xts$oecd["2017-01::2017-12"] / as.vector(cli_xts$oecd["2016-07::2017-06"])-1)*100,cli_xts$oecd["2017-01::2017-12"], xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=3)
par(new=T)
plot.default((cli_xts$oecd["2016-01::2016-12"] / as.vector(cli_xts$oecd["2015-07::2016-06"])-1)*100,cli_xts$oecd["2016-01::2016-12"], xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=4)
par(new=T)
plot.default((cli_xts$oecd["2015-01::2015-12"] / as.vector(cli_xts$oecd["2014-07::2015-06"])-1)*100,cli_xts$oecd["2015-01::2015-12"], xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=5
)
abline(h=100)
abline(v=0)
legend("topleft", legend = "Light Blue: 2015\nBlue: 2016\nLime: 2017\nPurple: 2018\nRed: 2019",bty='n')


Tuesday, March 12, 2019

CLI 6-month delta, VIX and S&P500. plot.zoo



mnt <- index(cli_xts$oecd["2000::2018"][cli_xts$oecd["2000::2018"]/as.vector(cli_xts$oecd["1999-07-01::2018-06-01"]) < 1])
plot.zoo(merge(VIX["2000::2018"][,4],VIX[mnt][,4]),type='h',col = c("red", "blue"), plot.type = "single")
abline(v=seq(as.Date("2001-01-01"),as.Date("2019-01-01"),by='years'), col='green')
par(new=T)
plot.default(SP5[,4]["2000::2018"],axes=F)





VIX vs. CLI 6-month delta. translucent histogram, hist, setdiff


# download ^VIX historical data from Yahoo Finance.
#
VIX <- as.xts(read.zoo(read.csv("~/VIX.csv")))

# select dates where the CLI declined over the preceding 6 months.
mnt <- index(cli_xts$oecd["2000::2018"][cli_xts$oecd["2000::2018"]/as.vector(cli_xts$oecd["1999-07-01::2018-06-01"]) < 1])

  [1] "2000-06-01" "2000-07-01" "2000-08-01" "2000-09-01" "2000-10-01" "2000-11-01" "2000-12-01" "2001-01-01"
  [9] "2001-02-01" "2001-03-01" "2001-04-01" "2001-05-01" "2001-06-01" "2001-07-01" "2001-08-01" "2001-09-01"
<skip>
[105] "2018-05-01" "2018-06-01" "2018-07-01" "2018-08-01" "2018-09-01" "2018-10-01" "2018-11-01" "2018-12-01"

# select dates of positive movement.
> as.Date(setdiff(seq(as.Date("2000-01-01"),as.Date("2018-12-01"),by='months'),mnt))
  [1] "2000-01-01" "2000-02-01" "2000-03-01" "2000-04-01" "2000-05-01" "2001-12-01" "2002-01-01" "2002-02-01"
  [9] "2002-03-01" "2002-04-01" "2002-05-01" "2002-06-01" "2002-07-01" "2003-06-01" "2003-07-01" "2003-08-01"
<skip>
[105] "2017-03-01" "2017-04-01" "2017-05-01" "2017-06-01" "2017-07-01" "2017-08-01" "2017-09-01" "2017-10-01"
[113] "2017-11-01" "2017-12-01" "2018-01-01" "2018-02-01"


as.vector(VIX[,2][as.Date(setdiff(seq(as.Date("2000-01-01"),as.Date("2018-12-01"),by='months'),mnt))])
#   [1] 29.00 28.12 25.87 34.31 32.89 26.38 26.88 27.32 21.12 24.50 22.71 30.98 48.46 22.81 20.80 23.89 23.26 22.82 19.61
#  [20] 18.86 18.68 18.06 22.67 17.98 20.45 17.04 14.39 17.19 15.66 12.44 14.56 13.73 13.34 13.09 19.87 23.81 19.58 16.15
#  [39] 14.49 12.91 12.55 12.68 12.83 19.01 21.25 15.46 14.60 18.98 24.17 37.50 32.77 33.05 28.39 29.57 31.59 31.84 24.51
#  [58] 28.01 29.22 19.94 23.20 48.20 37.38 37.58 28.92 25.13 24.34 23.84 21.43 20.08 23.22 31.28 19.07 21.24 21.06 25.46
#  [77] 15.93 19.28 16.82 18.20 16.35 21.91 17.32 17.81 17.49 21.34 14.14 16.75 18.99 21.48 18.22 17.85 14.49 12.89 25.20
#  [96] 23.43 22.81 14.93 20.51 17.95 23.01 14.72 14.07 12.96 15.11 16.28 16.30 15.16 13.05 17.28 14.06 13.20 14.51 14.58
# [115] 15.42 50.30
# > as.vector(VIX[,2][mnt])
#   [1] 25.01 21.65 20.84 22.66 30.80 31.11 32.32 30.80 30.62 35.45 35.20 26.49 24.42 25.61 25.84 49.35 36.95 34.57 45.21
#  [20] 41.86 43.44 32.60 31.20 35.33 35.66 34.40 30.04 22.33 17.93 19.97 15.98 16.87 16.76 13.74 14.75 13.20 14.89 18.59
#  [39] 17.70 13.34 13.92 14.41 28.82 24.15 31.09 24.86 37.57 29.70 35.60 25.61 20.95 24.56 30.81 23.86 48.40 89.53 81.48
#  [58] 68.60 57.36 53.16 53.25 45.60 36.88 20.03 24.65 25.94 48.00 43.87 46.88 37.53 30.91 23.73 21.98 27.73 21.00 19.25
#  [77] 18.96 19.65 19.40 23.23 17.11 17.57 17.08 31.06 15.93 17.19 16.66 16.36 19.80 20.05 53.29 33.82 25.23 20.67 26.81
#  [96] 32.09 30.90 20.17 17.09 17.65 26.72 17.04 26.22 25.72 18.78 19.61 18.08 16.86 15.63 28.84 23.81 36.20

# compare the monthly highs: negative vs. positive
t.test(as.vector(VIX[,2][mnt]),as.vector(VIX[,2][as.Date(setdiff(seq(as.Date("2000-01-01"),as.Date("2018-12-01"),by='months'),mnt))]))

# Welch Two Sample t-test
#
# data:  as.vector(VIX[, 2][mnt]) and as.vector(VIX[, 2][as.Date(setdiff(seq(as.Date("2000-01-01"), as.vector(VIX[, 2][mnt]) and     as.Date("2018-12-01"), by = "months"), mnt))])
# t = 4.8495, df = 174.7, p-value = 2.725e-06
# alternative hypothesis: true difference in means is not equal to 0
# 95 percent confidence interval:
#  4.164086 9.879529
# sample estimates:
# mean of x mean of y
#  28.55741  21.53560

# compare monthly close.
t.test(as.vector(VIX[,4][mnt]),as.vector(VIX[,4][as.Date(setdiff(seq(as.Date("2000-01-01"),as.Date("2018-12-01"),by='months'),mnt))]))

# Welch Two Sample t-test
#
# data:  as.vector(VIX[, 4][mnt]) and as.vector(VIX[, 4][as.Date(setdiff(seq(as.Date("2000-01-01"), as.vector(VIX[, 4][mnt]) and     as.Date("2018-12-01"), by = "months"), mnt))])
# t = 4.7722, df = 178.21, p-value = 3.781e-06
# alternative hypothesis: true difference in means is not equal to 0
# 95 percent confidence interval:
#  2.860369 6.893837
# sample estimates:
# mean of x mean of y
#  22.19598  17.31888
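Note that `t.test()` defaults to the Welch unequal-variance form reported above (`var.equal = FALSE`), which is appropriate here since the high-VIX regime is also the higher-variance one. A reproducible toy check with synthetic VIX-like samples (the numbers are made up for illustration):

```r
# Two synthetic samples with different means and variances,
# mirroring the negative- vs positive-CLI VIX comparison above.
set.seed(42)
neg <- rnorm(110, mean = 30, sd = 10)   # CLI deteriorating: higher, wilder VIX
pos <- rnorm(120, mean = 18, sd = 6)    # CLI improving
res <- t.test(neg, pos)                 # Welch by default
grepl("Welch", res$method)              # TRUE
res$p.value < 0.05                      # the gap in means is detected
```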

par(mfrow=c(2,1))

hist(as.vector(VIX[,4][mnt]),ylim=c(0,40),xlim=c(10,60),breaks=20)
hist(as.vector(VIX[,4][as.Date(setdiff(seq(as.Date("2000-01-01"),as.Date("2018-12-01"),by='months'),mnt))]),ylim=c(0,40),xlim=c(10,60),breaks=10)

# translucent histograms

hist(as.vector(VIX[,4][as.Date(setdiff(seq(as.Date("2000-01-01"),as.Date("2018-12-01"),by='months'),mnt))]),ylim=c(0,40),xlim=c(10,60),breaks=10,col=2)
par(new=T)
hist(as.vector(VIX[,4][mnt]),ylim=c(0,40),xlim=c(10,60),breaks=20,col=rgb(0, 1, 0, alpha=0.1))

                              OR

hist(as.vector(VIX[,4][as.Date(setdiff(seq(as.Date("2000-01-01"),as.Date("2018-12-01"),by='months'),mnt))]),ylim=c(0,30),xlim=c(10,60),breaks=10,col=rgb(1, 0, 0, alpha=0.5))
par(new=T)
hist(as.vector(VIX[,4][mnt]),ylim=c(0,30),xlim=c(10,60),breaks=20,col=rgb(0, 0, 1, alpha=0.1))

Red for positive and Blue for negative.
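A side note on the overlay: `par(new=T)` just restarts plotting on the same device without aligning coordinates, which is why the xlim/ylim pairs must be repeated by hand. `hist(..., add = TRUE)` draws into the existing axes instead; a self-contained sketch (synthetic stand-in data, written to a temporary PNG so it runs headless):

```r
set.seed(1)
pos_vix <- rnorm(120, mean = 21, sd = 5)   # stand-ins for the two VIX samples
neg_vix <- rnorm(110, mean = 28, sd = 9)

png(tempfile(fileext = ".png"))
h1 <- hist(pos_vix, xlim = c(10, 60), ylim = c(0, 30), breaks = 10,
           col = rgb(1, 0, 0, alpha = 0.5), main = "VIX by CLI regime")
h2 <- hist(neg_vix, breaks = 20, col = rgb(0, 0, 1, alpha = 0.1), add = TRUE)
dev.off()
sum(h1$counts); sum(h2$counts)   # 120 and 110: every observation is binned
```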



change color scheme and transparency. 





Monday, March 11, 2019

CLI 6 month delta and S&P 500 monthly return. OR and VIX


> mnt <- index(cli_xts$oecd["2000::2018"][cli_xts$oecd["2000::2018"] < 100 & cli_xts$oecd["2000::2018"]/as.vector(cli_xts$oecd["1999-07-01::2018-06-01"]) < 1])
> mnt
 [1] "2000-12-01" "2001-01-01" "2001-02-01" "2001-03-01" "2001-04-01" "2001-05-01" "2001-06-01" "2001-07-01"
 [9] "2001-08-01" "2001-09-01" "2001-10-01" "2001-11-01" "2002-08-01" "2002-09-01" "2002-10-01" "2002-11-01"
[17] "2002-12-01" "2003-01-01" "2003-02-01" "2003-03-01" "2003-04-01" "2003-05-01" "2008-06-01" "2008-07-01"
[25] "2008-08-01" "2008-09-01" "2008-10-01" "2008-11-01" "2008-12-01" "2009-01-01" "2009-02-01" "2009-03-01"
[33] "2009-04-01" "2009-05-01" "2011-08-01" "2011-09-01" "2011-10-01" "2011-11-01" "2011-12-01" "2012-01-01"
[41] "2012-02-01" "2012-06-01" "2012-07-01" "2012-08-01" "2012-09-01" "2012-10-01" "2012-11-01" "2012-12-01"
[49] "2015-09-01" "2015-10-01" "2015-11-01" "2015-12-01" "2016-01-01" "2016-02-01" "2016-03-01" "2016-04-01"
[57] "2016-05-01" "2016-06-01" "2016-07-01" "2018-07-01" "2018-08-01" "2018-09-01" "2018-10-01" "2018-11-01"
[65] "2018-12-01"
> plot.zoo(merge(SP5["2000::"][,4]/SP5["2000::"][,1]-1,SP5[mnt][,4]/SP5[mnt][,1]-1),type='h',col = c("red", "blue"), plot.type = "single")



mnt <- index(cli_xts$oecd["2000::2018"][cli_xts$oecd["2000::2018"] < 100 & cli_xts$oecd["2000::2018"]/as.vector(cli_xts$oecd["1999-07-01::2018-06-01"]) < 1])
plot.zoo(merge(SP5["2000::2018"][,4]/SP5["2000::2018"][,1]-1,SP5[mnt][,4]/SP5[mnt][,1]-1),type='h',col = c("red", "blue"), plot.type = "single")




mnt <- index(cli_xts$oecd["2000::2018"][cli_xts$oecd["2000::2018"]/as.vector(cli_xts$oecd["1999-07-01::2018-06-01"]) < 1])
plot.zoo(merge(SP5["2000::2018"][,4]/SP5["2000::2018"][,1]-1,SP5[mnt][,4]/SP5[mnt][,1]-1),type='h',col = c("red", "blue"), plot.type = "single")
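The expression SP5[,4]/SP5[,1]-1 used above is the monthly open-to-close return: column 1 is Open and column 4 is Close in OHLC data. A toy check on a hand-built matrix:

```r
# Close / Open - 1 gives the intramonth return for each OHLC row.
ohlc <- matrix(c(100, 105,  98, 104,
                 104, 110, 103,  99),
               nrow = 2, byrow = TRUE,
               dimnames = list(c("2018-11", "2018-12"),
                               c("Open", "High", "Low", "Close")))
ret <- ohlc[, "Close"] / ohlc[, "Open"] - 1
round(ret, 4)   # 0.0400 and -0.0481
```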








plot.zoo(merge(VIX["2000::2018"][,4],VIX[mnt][,4]),type='h',col = c("red", "blue"), plot.type = "single")
abline(v=seq(as.Date("2001-01-01"),as.Date("2019-01-01"),by='years'), col='green')





Tuesday, February 19, 2019

CLI vs. 1 month delta + legend



plot.default(diff(cli_xts$oecd["2014-12-01::"])[-1],cli_xts$oecd["2015::"],type='b')
tmp <- par('usr')
plot.default(diff(cli_xts$oecd["2014-12-01::"])[-1],cli_xts$oecd["2015::"],type='b',xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]))
par(new=T)
plot.default(diff(cli_xts$oecd["2016-12-01::2017"])[-1],cli_xts$oecd["2017"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=3,lwd=2)
par(new=T)
plot.default(diff(cli_xts$oecd["2015-12-01::2016"])[-1],cli_xts$oecd["2016"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=5,lwd=2)
par(new=T)
plot.default(diff(cli_xts$oecd["2014-12-01::2015"])[-1],cli_xts$oecd["2015"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=6,lwd=2)
par(new=T)
plot.default(diff(cli_xts$oecd["2017-12-01::"])[-1],cli_xts$oecd["2018"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=2,lwd=2)
#
#    the lines below are for when the data for 2019 and later are released. comment out the 2 lines above, and uncomment the 4 lines below.
#
# par(new=T)
# plot.default(diff(cli_xts$oecd["2017-12-01::2018"])[-1],cli_xts$oecd["2018"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=7,lwd=2)
# par(new=T)
# plot.default(diff(cli_xts$oecd["2018-12-01::"])[-1],cli_xts$oecd["2019"],xlim=c( tmp[1],tmp[2]), ylim=c(tmp[3], tmp[4]),col=2,lwd=2)
#
abline(v=0)
abline(h=100)
legend("topleft", legend = "Pink: 2015\nLight Blue: 2016\nLime: 2017\nRed: 2018",bty='n')



Sunday, February 17, 2019

plot abline eps GSPC




plot.default(diff(eps_year_xts["2007::2018"])[-1],type='h',axes=F)
par(new=T)
qtr <- seq(as.Date("2007-04-01"),as.Date("2018-12-31"),by='quarters')
plot.default(qtr,to.quarterly(GSPC["2007-04::2018"])[,4],type='l')
abline(v=as.Date("2018-01-01"),col = "gray60",lty=3)
abline(v=as.Date("2015-01-01"),col = "gray60",lty=3)
abline(v=as.Date("2017-01-01"),col = "gray60",lty=3)
abline(v=as.Date("2016-01-01"),col = "gray60",lty=3)
abline(h=2000,col="gray60",lty=3)
abline(v=as.Date("2014-01-01"),col = "gray60",lty=3)


For the longer period, use SP5 instead of GSPC

plot.default(diff(eps_year_xts["1992::2018"])[-1],type='h',axes=F,col=3)
par(new=T)
qtr <- seq(as.Date("1992-04-01"),as.Date("2018-12-31"),by='quarters')
plot.default(qtr,to.quarterly(SP5["1992-04::2018"])[,4],type='l')
abline(v=as.Date("2018-01-01"),col = "gray60",lty=3)
abline(v=as.Date("2015-01-01"),col = "gray60",lty=3)
abline(v=as.Date("2017-01-01"),col = "gray60",lty=3)
abline(v=as.Date("2016-01-01"),col = "gray60",lty=3)
abline(h=2000,col="gray60",lty=3)
abline(v=as.Date("2014-01-01"),col = "gray60",lty=3)




To see the relation between the S&P 500 and payrolls, change the frequency from quarterly to monthly.

plot.default(diff(PA["1992::2018"])[-1],type='h',axes=F,col=3)
par(new=T)
mnt <- seq(as.Date("1992-02-01"),as.Date("2018-12-31"),by='months')
plot.default(mnt,to.monthly(SP5["1992-02::2018"])[,4],type='l')
abline(v=as.Date("2018-01-01"),col = "gray60",lty=3)
abline(v=as.Date("2015-01-01"),col = "gray60",lty=3)
abline(v=as.Date("2017-01-01"),col = "gray60",lty=3)
abline(v=as.Date("2016-01-01"),col = "gray60",lty=3)
abline(h=2000,col="gray60",lty=3)
abline(v=as.Date("2014-01-01"),col = "gray60",lty=3)
par(new=T)
plot.default(mnt,as.vector(cli_xts$oecd["1992-02::2018"]),axes=F,col=4,type='l')
par(new=T)
plot.default(mnt,na.trim(diff(cli_xts$oecd["1991-08::2018"],lag=6)),axes=F,col=6,type='l')