qc4u2_day5_pred.ipynb
@mohzeki222 (last active May 20, 2024)
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"private_outputs": true,
"provenance": [],
"gpuType": "V100",
"authorship_tag": "ABX9TyPrFhRs1AIL+H+7RaOrKiVi",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/mohzeki222/01e9708eebfba6c4ce6e0846d3bed4ac/qc4u2_day5_pred.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# 金融市場の予測を量子コンピュータでやってみよう!\n",
"\n"
],
"metadata": {
"id": "ROu1giq4Il13"
}
},
{
"cell_type": "markdown",
"source": [
"[https://arxiv.org/pdf/2204.06150.pdf]\n",
"\n",
"今回のテーマはこちらの論文が参考文献となります。\n",
"Hamiltonian Learningというものです。"
],
"metadata": {
"id": "VKtoo9zW3utS"
}
},
{
"cell_type": "markdown",
"source": [
"まずはいつものpennylaneを準備してみましょう。"
],
"metadata": {
"id": "xuP3D1hmI4nK"
}
},
{
"cell_type": "code",
"source": [
"!pip install pennylane"
],
"metadata": {
"id": "Due7XQcyIx5u"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"また今回は比較的長い時間のかかるタスクを実行しますので、途中の時間経過がわかるようにtqdmを利用します。"
],
"metadata": {
"id": "XuWrmNlgTpq-"
}
},
{
"cell_type": "code",
"source": [
"import tqdm"
],
"metadata": {
"id": "38QtZESaTx9b"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# 金融データから確率行列を作る\n",
"\n",
"金融データを予測する量子機械学習の例を実践してみます。"
],
"metadata": {
"id": "Hd-8Nydp6Ndw"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jeqxA2RE5yAg"
},
"outputs": [],
"source": [
"#データ読み込みスタート時点\n",
"TRAIN_START ='2016-01-01'\n",
"#データ読み込み終了時点\n",
"TRAIN_END = '2016-12-31'"
]
},
{
"cell_type": "code",
"source": [
"#予測開始時点\n",
"PRED_START ='2017-01-01'\n",
"#予測終了時点\n",
"PRED_END = '2018-03-01'"
],
"metadata": {
"id": "EtDQQ0hb6azC"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"データ読み込みによく利用するフレームワークでpandasを利用します。\n",
"特に金融データを読み取る場合にdatareaderを活用します。"
],
"metadata": {
"id": "FE_oCym76pa0"
}
},
{
"cell_type": "code",
"source": [
"from pandas_datareader import data as pdr\n",
"import yfinance as yf\n",
"#アメリカのYahoo! financeを利用できるように\n",
"yf.pdr_override()\n",
"df1 = pdr.get_data_yahoo(\"IBM\", start=TRAIN_START, end=TRAIN_END)\n",
"df2 = pdr.get_data_yahoo(\"GOOG\", start=TRAIN_START, end=TRAIN_END)"
],
"metadata": {
"id": "N1B27kYj6mZQ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"df_ref1 = pdr.get_data_yahoo(\"IBM\", start=PRED_START, end=PRED_END)\n",
"df_ref2 = pdr.get_data_yahoo(\"GOOG\", start=PRED_START, end=PRED_END)"
],
"metadata": {
"id": "oSfrOYD76-HR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"dfやdf_refと入力するとどんなデータが取得できたかを調べることができます。"
],
"metadata": {
"id": "CL55D-jP8TDY"
}
},
{
"cell_type": "code",
"source": [
"df1"
],
"metadata": {
"id": "4zctGAV58piR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"この読み込んだデータから必要情報だけを抜き取る空リストを用意します。"
],
"metadata": {
"id": "wJf_sWm48N8g"
}
},
{
"cell_type": "code",
"source": [
"cdata1 = []\n",
"cdata2 = []\n",
"cdata_ref1 = []\n",
"cdata_ref2 = []"
],
"metadata": {
"id": "Q9F31g1O8E-C"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#終値だけ取得\n",
"for i in df1['Close']:\n",
" cdata1.append(i)\n",
"for i in df2['Close']:\n",
" cdata2.append(i)\n",
"\n",
"for i in df_ref1['Close']:\n",
" cdata_ref1.append(i)\n",
"for i in df_ref2['Close']:\n",
" cdata_ref2.append(i)"
],
"metadata": {
"id": "nRyGUVTm8WEm"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"data1 = []\n",
"data2 = []\n",
"#繋げたデータを作成\n",
"for i in range(len(cdata1)):\n",
" data1.append(cdata1[i])\n",
" data2.append(cdata2[i])\n",
"for i in range(30):\n",
" data1.append(cdata_ref1[i])\n",
" data2.append(cdata_ref2[i])"
],
"metadata": {
"id": "n5swqKLt8uZM"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"対数をとってスケーリング。\n",
"その上で差分を取る。"
],
"metadata": {
"id": "qTJ77s_S9hbP"
}
},
{
"cell_type": "markdown",
"source": [
"ここでnumpyを読み込む際に、pennylaneのnumpyを読むことに注意(requires_gradをはじめ、回路の学習に用いるパラメータをnumpyを利用して初期化したものを利用することができる)"
],
"metadata": {
"id": "9TffmcyTl-qs"
}
},
{
"cell_type": "code",
"source": [
"from pennylane import numpy as np\n",
"\n",
"diff_log1 = np.log(data1)\n",
"diff_log2 = np.log(data2)\n",
"diff_series1 = np.diff(diff_log1)\n",
"diff_series2 = np.diff(diff_log2)\n",
"\n",
"#最後の値\n",
"x1_log1 = diff_log1[-30]\n",
"x1_log2 = diff_log2[-30]"
],
"metadata": {
"id": "1f67N10a9V2B"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"この上記のデータを適切な離散化を行い\n",
"遷移行列という形で保持します。\n",
"与えられたデータに対して、最大値と最小値を調べ、その範囲をparで指定した数で区別して、その中でどのように飛び移っているのかを調べることにしましょう。\n"
],
"metadata": {
"id": "g59A8UXl-PvD"
}
},
{
"cell_type": "code",
"source": [
"par = 2\n",
"def make_range(y, par=par):\n",
" max_y = max(y)\n",
" min_y = min(y)\n",
" range_list = np.percentile(y, q = list(range(0,100,int(100/par)))[1:])\n",
"\n",
" temp_list = np.concatenate([[min_y],range_list])\n",
" temp_list = np.concatenate([temp_list,[max_y]])\n",
"\n",
" ave_list = []\n",
" for k in range(len(temp_list)-1):\n",
" ave_list.append((temp_list[k]+temp_list[k+1])/2)\n",
"\n",
" return range_list,ave_list"
],
"metadata": {
"id": "ebZG1ETT-HbZ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"早速実行してrange_listとして保持しましょう。"
],
"metadata": {
"id": "pa2eemdHDDhP"
}
},
{
"cell_type": "code",
"source": [
"range_list1,ave_list1 = make_range(diff_series1[-30:])\n",
"range_list2,ave_list2 = make_range(diff_series2[-30:])"
],
"metadata": {
"id": "yw8Mf61kBFEY"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"しっかり区分けされているのか調べてみましょう。\n",
"matplotlibで図示します。"
],
"metadata": {
"id": "8lN60C1LDGBt"
}
},
{
"cell_type": "code",
"source": [
"import matplotlib.pyplot as plt\n",
"plt.plot(diff_series1)\n",
"plt.plot(range(len(diff_series1)),range_list1[0]*np.ones(len(diff_series1)))\n",
"plt.plot(range(len(diff_series1)),ave_list1[0]*np.ones(len(diff_series1)))\n",
"plt.plot(range(len(diff_series1)),ave_list1[1]*np.ones(len(diff_series1)))\n",
"plt.show()"
],
"metadata": {
"id": "9q-t9gLdAV5g"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def make_discrete(y,range_list,par=par):\n",
" dis_time = []\n",
" for v in y:\n",
" for k in range(par):\n",
" if k == 0:\n",
" if v < range_list[k]:\n",
" dis_time.append(k)\n",
" elif k == par - 1:\n",
" if range_list[-1] <= v:\n",
" dis_time.append(k)\n",
" else:\n",
" if range_list[k-1] <= v < range_list[k]:\n",
" dis_time.append(k)\n",
"\n",
" return dis_time"
],
"metadata": {
"id": "ZkyHBp0wBuub"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"dis_time1 = make_discrete(diff_series1,range_list1)\n",
"dis_time2 = make_discrete(diff_series2,range_list2)"
],
"metadata": {
"id": "0pQCEqFMCOzt"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"それでは準備ができましたので、遷移行列を求めることにしましょう。\n",
"まずはその行列の準備をします。"
],
"metadata": {
"id": "hCQc5f8CEE02"
}
},
{
"cell_type": "code",
"source": [
"mat1 = np.zeros([par,par])"
],
"metadata": {
"id": "E_h7m3CXDiW8"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"遷移行列というのはある時間的推移があったときに、\n",
"その回数をまとめたものです。\n",
"1から2に遷移した場合にはmat[1][2]に+1、3から2に遷移した場合にはmat[3][2]に+1としていきます。"
],
"metadata": {
"id": "6j24Nwg1ErMD"
}
},
{
"cell_type": "code",
"source": [
"for time in range(len(dis_time1)-30):#端は省く\n",
" t1 = dis_time1[time]\n",
" t2 = dis_time1[time+1]\n",
" mat1[t1][t2] +=1"
],
"metadata": {
"id": "dqh1qdnAETxd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"mat[t1][t2]のうち、t1を現在の時刻として、t2を次の時刻と考えます。\n",
"各行について規格化(比率にする)をします。\n",
"足し算をして合計値から割り算をすれば比率を出すことができます。\n",
"まずは各行の合計遷移数を計算します。"
],
"metadata": {
"id": "XdMOlkqIE6lQ"
}
},
{
"cell_type": "code",
"source": [
"z_list1 = []\n",
"for p in range(par):\n",
" z_list1.append([sum(mat1[p,:])])"
],
"metadata": {
"id": "5KRFwySwEiCW"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"その後に割り算を次のように行うと、各行ごとで割り算を実行してくれます。"
],
"metadata": {
"id": "Qx39Ip4eHbDH"
}
},
{
"cell_type": "code",
"source": [
"mat1 = mat1/np.array(z_list1)"
],
"metadata": {
"id": "bKsHRQvlEjcI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"これらを統合して以下のように関数にしてまとめておきます。"
],
"metadata": {
"id": "DiJIuu9SPwd1"
}
},
{
"cell_type": "code",
"source": [
"def make_mat(dis_time,par=par):\n",
" mat = np.zeros([par,par])\n",
" for time in range(len(dis_time)-30):#端は省く\n",
" t1 = dis_time[time]\n",
" t2 = dis_time[time+1]\n",
" mat[t1][t2] +=1\n",
"\n",
" z_list = []\n",
" for p in range(par):\n",
" z_list.append([sum(mat[p,:])])\n",
"\n",
" mat = mat/np.array(z_list)\n",
"\n",
" return mat"
],
"metadata": {
"id": "0w5lx2nnPzvj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"mat1 = make_mat(dis_time1)\n",
"mat2 = make_mat(dis_time2)"
],
"metadata": {
"id": "RiPNvIDXWXlN"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"この行列は次の時刻への遷移を確率により決める確率行列というものになります。これを何度も掛け算するとk回掛け算するとk時刻後の様子を確率で示してくれる行列過程を表します。\n",
"\n",
"実際の金融データから得られたものですので、金融市場のシミュレーションを行うような行列ということになります。"
],
"metadata": {
"id": "1gKB6-1tH6O-"
}
},
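{
"cell_type": "markdown",
"source": [
"Stated precisely (an added aside, using the convention of the code above in which the row index is the current state): if $M_{ij}$ is the probability of moving from state $i$ to state $j$ in one step, then\n",
"\n",
"$$(M^k)_{ij} = P(x_{t+k} = j \\mid x_t = i),$$\n",
"\n",
"so the $k$-th power of the matrix gives the distribution $k$ time steps later."
],
"metadata": {}
},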
{
"cell_type": "code",
"source": [
"def make_mat_k(mat,k):\n",
" mat = np.matrix(mat)\n",
" return mat**(k+1)"
],
"metadata": {
"id": "hbH4DkZnG-zm"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"ここで利用したnp.matrixは、数字と同じように冪乗を**kという形で書くと、行列のk乗を実行してくれます。"
],
"metadata": {
"id": "AsNjCjQaIbEX"
}
},
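{
"cell_type": "markdown",
"source": [
"As a quick sanity check (a minimal added sketch, not part of the original analysis), the cell below compares make_mat_k(mat1, 1) with an explicit matrix product; the two results should agree."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Minimal check: make_mat_k(mat, k) returns the (k+1)-th matrix power,\n",
"# so make_mat_k(mat1, 1) should equal mat1 multiplied by itself once.\n",
"print(make_mat_k(mat1, 1))\n",
"print(np.array(mat1) @ np.array(mat1))  # should match the matrix above"
],
"metadata": {},
"execution_count": null,
"outputs": []
},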
{
"cell_type": "markdown",
"source": [
"# 量子回路を組む\n",
"\n",
"それではpennylaneを利用して量子回路を組んでみましょう。\n",
"今回は入力された数値を2進数で展開して、その数値に応じてXを適用し、\n",
"BasicEntanglerLayersと似たStronglyEntanglingLayersを利用します。\n",
"こちらではX,Y,Zそれぞれの軸周りで回転をさせたのちに、制御Xゲートを利用してエンタングルメントを生成します。\n",
"入力される数値は前後の確率的な変動を利用します。それをstate1,state2とします。\n",
"さらに用意されたパラメータgammaに応じて、Z軸周りで回転させます。\n",
"こののち、もう一度self.adjointでStrongleEntanglingLayersを適用します。\n",
"\n",
"最後に測定をした結果を利用して量子回路の出力とします。\n",
"interface=\"autograd\"とすることで、PyTorchと同様に、自動微分が行われて量子回路の最適化を実行することができるようにします。"
],
"metadata": {
"id": "3Axw9JwVJCQL"
}
},
{
"cell_type": "code",
"source": [
"import pennylane as qml\n",
"from pennylane.templates import StronglyEntanglingLayers\n",
"\n",
"#測定する量子ビット数\n",
"qubits_data = int(np.log2(par))*2\n",
"#補助系の量子ビット数\n",
"qubits_data_ancilla = qubits_data\n",
"#合計の量子ビット数\n",
"n_qubits = qubits_data + qubits_data_ancilla\n",
"\n",
"dev = qml.device('default.qubit', wires = n_qubits)\n",
"@qml.qnode(dev, interface=\"autograd\")\n",
"def circuit(weights, gamma, state1, state2, t, qubits_data, qubits_data_ancilla):\n",
" n_qubits = qubits_data + qubits_data_ancilla\n",
" qubits_data1 =int(qubits_data/2)\n",
" qubits_data2 =int(qubits_data/2)\n",
"\n",
" #2進数で数値を0と1に変換してその数に応じてXを実行\n",
" bin_str = '0'+str(int(np.log2(par)))+'b'#ビットの桁数指定\n",
"\n",
" for i in range(qubits_data1):\n",
" qml.PauliX(i)**int(format(state1, bin_str)[i])\n",
"\n",
" for j in range(qubits_data2):\n",
" qml.PauliX(j+qubits_data1)**int(format(state2, bin_str)[j])\n",
"\n",
" qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))#Ansatz\n",
"\n",
" for i in range(n_qubits):\n",
" qml.RZ(t*gamma[i], wires= i)#Ansatz2\n",
"\n",
" qml.adjoint(StronglyEntanglingLayers)(weights, wires=range(n_qubits))\n",
"\n",
" return [qml.probs(wires=[i for i in range(qubits_data1)]), qml.probs(wires=[i+qubits_data1 for i in range(qubits_data2)])]\n",
"\n",
"Qnode = qml.QNode(circuit,dev)"
],
"metadata": {
"id": "eBaT4_5TIaU3"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"次に量子回路を学習する際に基準となるcriterionとして以下のコスト関数を用意します。\n",
"\n",
"量子回路から確率分布を得ることができました。\n",
"その数値と学習したい確率分布を合わせたいということでKL情報量を最小化することを考えます。\n",
"全時刻におけるKL情報量を足し合わせてコスト関数とします。"
],
"metadata": {
"id": "tEj_u4wZObjh"
}
},
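{
"cell_type": "markdown",
"source": [
"For reference (an added note, matching what the cost cell below computes): for each pair of start states and each time step $t$, the contribution to the cost has the form of a KL divergence between the circuit output $q$ and the target transition probabilities $p$ taken from the $t$-step matrix, with a small $\\epsilon$ added for numerical stability:\n",
"\n",
"$$D_{\\mathrm{KL}}(q\\,\\|\\,p) = \\sum_i q_i \\log \\frac{q_i}{p_i + \\epsilon}$$"
],
"metadata": {}
},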
{
"cell_type": "code",
"source": [
"def cost(weights_ansatz, gamma, qubits_data, qubits_data_ancilla, mat1, mat2, Tmat = 30, ep = 10e-5):\n",
" c = 0\n",
" for t in range(Tmat):\n",
" for s1 in range(par):\n",
" for s2 in range(par):\n",
" [output1,output2] = Qnode(weights_ansatz, gamma, s1, s2, t, qubits_data, qubits_data_ancilla)\n",
" mat_k1 = np.array(make_mat_k(mat1, t))#データ1の遷移行列\n",
" mat_k2 = np.array(make_mat_k(mat2, t))#データ2の遷移行列\n",
"\n",
" for i in range(len(output1)):\n",
" c += output1[i]*np.log(output1[i]/(mat_k1[i][s1]+ep))\n",
" #print(output1[i]*np.log(output1[i]/(mat_k1[i][s1]+ep)))\n",
" for i in range(len(output2)):\n",
" c += output2[i]*np.log(output2[i]/(mat_k2[i][s1]+ep))\n",
" #print(output2[i],mat_k1[i][s2],output2[i]*np.log(output2[i]/(mat_k2[i][s2]+ep)))\n",
"\n",
" return c\n"
],
"metadata": {
"id": "l1fKiYavMVKA"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"これで準備完了です。\n",
"ニューラルネットワークの学習の際に行ったようにoptimizerの設定を行います。\n",
"pennylaneでも同じようにoptimizerが存在します。"
],
"metadata": {
"id": "d8jJumgPR7ZG"
}
},
{
"cell_type": "code",
"source": [
"max_steps = 20\n",
"opt = qml.AdamOptimizer(0.1)"
],
"metadata": {
"id": "1MzEaOsPRHUN"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"次にパラメータの準備を行いましょう。\n"
],
"metadata": {
"id": "bT6MkEldSefc"
}
},
{
"cell_type": "code",
"source": [
"shape = qml.StronglyEntanglingLayers.shape(n_layers = 1, n_wires=n_qubits)\n",
"weights_ansatz = 2 * np.pi * np.random.random(size=shape)\n",
"gamma = 2 * np.pi * np.random.random(n_qubits)"
],
"metadata": {
"id": "UVimnChBSgnO"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"ちなみにこのパラメータ、無造作にnumpyで初期化したものを利用しているが、requires_gradを調べると(通常のnumpyにはない)Trueとなっており、PyTorchでいうtensorと同じように扱うことができる。"
],
"metadata": {
"id": "eB32_w2VmOoP"
}
},
{
"cell_type": "code",
"source": [
"gamma.requires_grad"
],
"metadata": {
"id": "YWneuCg6lz8J"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"学習をしたい金融データから決まった確率行列matと量子回路の間の距離を最小化するためにコスト関数の記録をつけましょう。"
],
"metadata": {
"id": "At-93YSbSoFg"
}
},
{
"cell_type": "code",
"source": [
"cost_series = []"
],
"metadata": {
"id": "ns1vNZ7hSkX4"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"for step in tqdm.tqdm(range(max_steps)):\n",
" cost_temp = cost(weights_ansatz, gamma, qubits_data, qubits_data_ancilla, mat1, mat2)\n",
" cost_series.append(cost_temp)\n",
" weights_ansatz, gamma, _, _, _, _ = opt.step(cost,weights_ansatz, gamma, qubits_data, qubits_data_ancilla, mat1, mat2)\n",
" print(weights_ansatz)\n"
],
"metadata": {
"id": "RXTAHpe9S4XF"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"しっかりとコスト関数が減少している様子を確認しておきましょう。"
],
"metadata": {
"id": "HqEQY71ouQjb"
}
},
{
"cell_type": "code",
"source": [
"plt.plot(cost_series)"
],
"metadata": {
"id": "m24kzEDvXJB6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"さて出来上がった量子回路で未来の株価を予測してみましょう。\n",
"そのために最後の時刻の株価を出発点に実際に量子回路の出力から予測をしてみます。"
],
"metadata": {
"id": "9_THdrcLuVyn"
}
},
{
"cell_type": "markdown",
"source": [
"のちの10時刻分だけ予測してみましょう。\n",
"量子回路からは確率の数値だけが得られますので、その数値に従い乱数を生成して予測とします。\n"
],
"metadata": {
"id": "CRqwcwxpuuJE"
}
},
{
"cell_type": "code",
"source": [
"Tpred = 10\n",
"Nsample = 100\n",
"state_sample1 = []\n",
"state_sample2 = []\n",
"\n",
"for sample in range(Nsample):\n",
" state1 = dis_time1[-30]\n",
" state2 = dis_time2[-30]\n",
" state_series1 = []\n",
" state_series2 = []\n",
" for t in range(Tpred):\n",
" prob1,prob2 = Qnode(weights_ansatz, gamma, state1, state2, t, qubits_data, qubits_data_ancilla)\n",
" state1 = np.random.choice(par, p=prob1)#確率で選択\n",
" state2 = np.random.choice(par, p=prob2)#確率で選択\n",
" state_series1.append(state1)\n",
" state_series2.append(state2)\n",
" state_sample1.append(state_series1)\n",
" state_sample2.append(state_series2)"
],
"metadata": {
"id": "TRzxZ05yu4Io"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"これはあくまで区切られた区間(上下だけとか)の動きを示した変動の様子を示したのみです。\n",
"これを実際の株価変動と比較してプロットしてみましょう。\n",
"\n",
"少々ややこしいですが、state_series1or2に含まれるデータは0,1,,,の整数データであり、これにave_list1or2を通すと、実際の上下の変動幅に相当するものとなります。これを利用して、実際の値動きに変更する必要があります。"
],
"metadata": {
"id": "TcxlI4W0vqaq"
}
},
{
"cell_type": "code",
"source": [
"range_list1"
],
"metadata": {
"id": "7nW3KKqmtr8d"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"pred_sample1 = []\n",
"for state_series1 in state_sample1:\n",
" pred_series1 = []\n",
" for itemp in state_series1:\n",
" pred_series1.append(ave_list1[itemp])\n",
" pred_sample1.append(pred_series1)\n",
"\n",
"pred_sample2 = []\n",
"for state_series2 in state_sample2:\n",
" pred_series2 = []\n",
" for itemp in state_series2:\n",
" pred_series2.append(ave_list2[itemp])\n",
" pred_sample2.append(pred_series2)"
],
"metadata": {
"id": "yh18D-dOwtL6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"いくつかサンプリングした結果を平均して、\n",
"なめらかな変動として予測結果を出すことにしましょう。"
],
"metadata": {
"id": "S9pCufCirmQw"
}
},
{
"cell_type": "code",
"source": [
"ave1 = np.zeros(len(pred_series1))\n",
"for pred_series1 in pred_sample1:\n",
" ave1 += np.array(pred_series1)\n",
"ave1 = ave1/len(pred_sample1)\n",
"\n",
"ave2 = np.zeros(len(pred_series2))\n",
"for pred_series2 in pred_sample2:\n",
" ave2 += np.array(pred_series2)\n",
"ave2 = ave2/len(pred_sample2)"
],
"metadata": {
"id": "SgU9JD8BxJgF"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"pred_series1"
],
"metadata": {
"id": "SqIp-9k8osD0"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"変動の積算をするためにcumsum関数を使うと便利です。"
],
"metadata": {
"id": "fJeHTjnZrtG7"
}
},
{
"cell_type": "code",
"source": [
"log_recon1 = np.cumsum(ave1)\n",
"log_recon2 = np.cumsum(ave2)"
],
"metadata": {
"id": "eZxT7C-p0S0T"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"その結果をプロットしてみましょう。"
],
"metadata": {
"id": "oL-3zC78rx7X"
}
},
{
"cell_type": "code",
"source": [
"plt.plot(np.exp(log_recon1))\n",
"plt.plot(np.exp(log_recon2))"
],
"metadata": {
"id": "EjydXlmL0rlh"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"学習した時系列以降の値動きの変動について、予測結果をプロットをしてみましょう。"
],
"metadata": {
"id": "8FXZPUmQu2ED"
}
},
{
"cell_type": "code",
"source": [
"Start = 30\n",
"plt.plot(data1[-40:])\n",
"plt.plot(range(Start,Start+10),data1[-10]*np.exp(log_recon1-log_recon1[0]))"
],
"metadata": {
"id": "2j_Ziz-Ndbme"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"plt.plot(data2[-40:])\n",
"plt.plot(range(Start,Start+10),data2[-10]*np.exp(log_recon2-log_recon2[0]))"
],
"metadata": {
"id": "DJXdogybfCY4"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"par=2ですので大した予測制度はありませんが、parを増やしてみてじっくりと学習してみるといかがでしょうか。\n",
"なかなかの精度が出るかと思います。"
],
"metadata": {
"id": "knIaH1Bcu8ou"
}
},
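{
"cell_type": "markdown",
"source": [
"A minimal sketch of what that would involve (an added note; it assumes par is kept a power of two, since the circuit encodes each state with int(np.log2(par)) bits per asset):"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch only: to try a finer discretization, set par to another power of two\n",
"# and re-run the discretization, transition-matrix, circuit and training cells.\n",
"# par = 4\n",
"# range_list1, ave_list1 = make_range(diff_series1[-30:], par=par)\n",
"# range_list2, ave_list2 = make_range(diff_series2[-30:], par=par)"
],
"metadata": {},
"execution_count": null,
"outputs": []
},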
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "TUg3BgRyvDAN"
},
"execution_count": null,
"outputs": []
}
]
}