
I have these features and labels, and they are not linear enough to be satisfied by a linear solution. I trained an SVR(kernel='rbf') model from sklearn, but now it's time to do it with tensorflow, and it's hard to say what to write to achieve the same or a better effect.
Do you see that lazy orange line there? It does not live up to your expectations.
The code itself:
```python
import pandas as pd
import numpy as np
import tensorflow as tf
import tqdm
import matplotlib.pyplot as plt
from omnicomm_data.test_data import get_model, clean_df
import os
from sklearn import preprocessing

graph = tf.get_default_graph()

# tf variables
x_ = tf.placeholder(name="input", shape=[None, 1], dtype=np.float32)
y_ = tf.placeholder(name="output", dtype=np.float32)
w = tf.Variable(tf.random_normal([]), name='weight')
b = tf.Variable(tf.random_normal([]), name='bias')
lin_model = tf.add(tf.multiply(x_, w), b)

# loss
loss = tf.reduce_mean(tf.pow(lin_model - y_, 2), name='loss')
train_step = tf.train.GradientDescentOptimizer(0.000000025).minimize(loss)

# nonlinear part
nonlin_model = tf.tanh(tf.add(tf.multiply(x_, w), b))
nonlin_loss = tf.reduce_mean(tf.pow(nonlin_model - y_, 2), name='cost')
train_step_nonlin = tf.train.GradientDescentOptimizer(0.000000025).minimize(nonlin_loss)

# pandas data
df_train = pd.read_csv('me_rate.csv', header=None)
liters = df_train.iloc[:, 0].values.reshape(-1, 1)
parrots = df_train.iloc[:, 1].values.reshape(-1, 1)

# model for prediction
mms = preprocessing.MinMaxScaler()
rbf = get_model(path_to_model)  # path_to_model is defined elsewhere in the project

n_epochs = 200
train_errors = []
non_train_errors = []
test_errors = []

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in tqdm.tqdm(range(n_epochs)):
        _, train_err = sess.run([train_step, loss],
                                feed_dict={x_: parrots, y_: liters})
        train_errors.append(train_err)
        _, non_train_err = sess.run([train_step_nonlin, nonlin_loss],
                                    feed_dict={x_: parrots, y_: liters})
        non_train_errors.append(non_train_err)

    plt.plot(list(range(n_epochs)), train_errors, label='train_lin')
    plt.plot(list(range(n_epochs)), non_train_errors, label='train_nonlin')
    plt.legend()
    print(train_errors[:10])
    print(non_train_errors[:10])
    plt.show()

    plt.scatter(parrots, liters, label='actual data')
    plt.plot(parrots, sess.run(lin_model, feed_dict={x_: parrots}),
             label='linear (tf)')
    plt.plot(parrots, sess.run(nonlin_model, feed_dict={x_: parrots}),
             label='nonlinear (tf)')
    plt.plot(parrots, rbf.predict(mms.fit_transform(parrots)),
             label='rbf (sklearn)')
    plt.legend()
    plt.show()
```

How do I motivate that orange line?
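The linear branch of the TensorFlow graph above is just mean-squared-error gradient descent on `w` and `b`. As a sanity check of that mechanism outside TensorFlow, here is a minimal NumPy sketch of the same update rule; the synthetic `x`/`y` stand in for the `parrots`/`liters` columns of `me_rate.csv`, which isn't available here, and the learning rate is chosen for this synthetic scale, not the original data:

```python
import numpy as np

# Synthetic stand-in for the parrots/liters data (assumed, not the real CSV)
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=(100, 1))
y = 3.0 * x + 2.0 + rng.normal(0.0, 0.1, size=(100, 1))

w, b = 0.0, 0.0
lr = 0.01  # a workable rate for this data scale
for _ in range(2000):
    err = w * x + b - y
    # gradients of mean((w*x + b - y)^2) with respect to w and b
    w -= lr * 2.0 * np.mean(err * x)
    b -= lr * 2.0 * np.mean(err)

print(w, b)  # should approach the true 3.0 and 2.0
```

This is the same computation `GradientDescentOptimizer(...).minimize(loss)` performs for the linear model, just written out by hand; if the hand-rolled version fits and the graph version doesn't, the problem is the learning rate or the data feed, not the method.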
The later attempt. The code is as follows:
```python
import pandas as pd
import numpy as np
import tensorflow as tf
import tqdm
import matplotlib.pyplot as plt
from omnicomm_data.test_data import get_model
import os
from sklearn import preprocessing

graph = tf.get_default_graph()

# tf variables
x_ = tf.placeholder(name="input", shape=[None, 1], dtype=np.float32)
y_ = tf.placeholder(name="output", dtype=np.float32)
w = tf.Variable(tf.random_normal([]), name='weight')
b = tf.Variable(tf.random_normal([]), name='bias')

# nonlinear
nonlin_model = tf.add(tf.multiply(tf.tanh(x_), w), b)
nonlin_loss = tf.reduce_mean(tf.pow(nonlin_model - y_, 2), name='cost')
train_step_nonlin = tf.train.GradientDescentOptimizer(0.01).minimize(nonlin_loss)

# pandas data
df_train = pd.read_csv('me_rate.csv', header=None)
liters = df_train.iloc[:, 0].values.reshape(-1, 1)
parrots = df_train.iloc[:, 1].values.reshape(-1, 1)

# model for prediction
mms = preprocessing.MinMaxScaler()
rbf = get_model(path_to_model)  # path_to_model is defined elsewhere in the project

nz = preprocessing.MaxAbsScaler()  # normalization because of tanh
norm_parrots = nz.fit_transform(parrots)
print(norm_parrots)

n_epochs = 20000
non_train_errors = []
weights = []
biases = []

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in tqdm.tqdm(range(n_epochs)):
        _, non_train_err, weight, bias = sess.run(
            [train_step_nonlin, nonlin_loss, w, b],
            feed_dict={x_: norm_parrots, y_: liters})
        non_train_errors.append(non_train_err)
        weights.append(weight)
        biases.append(bias)

    plt.scatter(norm_parrots, liters, label='actual data')
    plt.plot(norm_parrots, sess.run(nonlin_model, feed_dict={x_: norm_parrots}),
             c='orange', label='nonlinear (tf)')
    plt.plot(norm_parrots, rbf.predict(mms.fit_transform(parrots)),
             label='rbf (sklearn)')
    plt.legend()
    plt.show()
```

As you can clearly see, we got some improvement on the orange line (not as good as rbf, but it just needs more work).

Best answer: You are using tf.tanh as the activation, which means your output is limited to the range [-1, 1]. Therefore it can never fit your data.
Edit: I removed a part of the answer that pointed out a typo, since it has already been fixed.