
How to fix: ValueError: Cannot feed value of shape (506, 1) for Tensor 'Placeholder'?

I am trying to run code from GitHub, but I get this error:

File "/content/Deep_Learning_Prediction_Intervals/code/main.py", line 184, in <module>
    y=np.zeros_like(X_boundary[i-1]), in_sess=True)
File "/content/Deep_Learning_Prediction_Intervals/code/DeepNetPI.py", line 670, in predict
    y_pred_out = sess.run(self.y_pred, feed_dict={self.X: X, self.y_true: y, self.cens_R: censor_R_ind})
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1156, in _run
    (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (506, 1) for Tensor 'Placeholder:0', which has shape '(?, 13)'
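For context, this is TensorFlow 1.x's standard complaint when an array passed through feed_dict does not match a placeholder's declared static shape. A minimal sketch (not from the repository, written here only to illustrate) that reproduces the same message:

import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x, as in the traceback

X_ph = tf.placeholder(tf.float32, shape=[None, 13])  # expects rows of 13 features
row_sums = tf.reduce_sum(X_ph, axis=1)

with tf.Session() as sess:
    bad = np.zeros((506, 1))  # only 1 feature per row
    sess.run(row_sums, feed_dict={X_ph: bad})  # ValueError: Cannot feed value of shape (506, 1) ...

TensorFlow checks every fed array against the placeholder's shape, so the fix is always to make the fed array's trailing dimensions match what the graph was built with.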

The training data has shape (506, 13) and the validation data has shape (506, 1). The code of the main.py file is as follows:

import datetime

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from scipy import stats

# project imports: DeepNetPI is named in the traceback; the other module
# names are inferred from the calls below and may differ in the actual repo
from DataGen import DataGenerator
from DeepNetPI import TfNetwork
from utils import *

start_time = datetime.datetime.now()  # used by the timing info at the end

type_in = 'boston'    # data type to use - drunk_bow_tie x_gap ~boston concrete  ## CHANGE THE DATA TYPE AS PER YOUR USE
loss_type = 'qd_soft' # loss type to train on - qd_soft mve mse (mse = simple point prediction)  ## CHECK FOR DIFFERENT LOSSES (gauss_like, qd_soft, mse)
n_samples = 10000     # if generating data, how many points to generate
h_size = [100]        # number of hidden units in network: [50]=layer_1 of 50, [8,4]=layer_1 of 8, layer_2 of 4
alpha = 0.05          # data points captured = (1 - alpha)  ## FOR 95% PREDICTION INTERVAL
n_epoch = 300         # number of epochs to train for
optim = 'adam'        # optimiser - SGD adam  ## IT'S A GRADIENT-DESCENT-BASED OPTIMISER
l_rate = 0.02         # learning rate of optimiser
decay_rate = 0.9      # learning rate decay
soften = 160.         # hyperparameter for QD_soft
lambda_in = 15.       # hyperparameter for QD_soft
sigma_in = 0.1        # initialise std dev of NN weights
is_run_test = True    # if averaging over lots of runs - turns off some prints and graphs
n_ensemble = 5        # number of individual NNs in ensemble  ## CHECK
n_bootstraps = 1      # how many bootstrap resamples to perform  ## PICK ONE SAMPLE AND AGAIN PUT ANOTHER SAMPLE
n_runs = 1
# if is_run_test else 1
is_batch = True       # train in batches?
n_batch = 100         # batch size
lube_perc = 90.       # if model uncertainty method = perc - 50 to 100
perc_or_norm = 'norm' # model uncertainty method - perc norm (paper uses norm)
is_early_stop = False # stop training early (didn't use in paper)
is_bootstrap = False if n_bootstraps == 1 else True
train_prop = 0.9      # % of data to use as training  ## 90% OF DATA IS USED FOR TRAINING

out_biases = [3., -3.]  # chosen biases for output layer (for mve is overwritten to 0,1)
activation = 'relu'     # NN activation fns - tanh relu

# plotting options
is_use_val = True
save_graphs = True
show_graphs = True
#   if is_run_test else True
show_train = True
#   if is_run_test else True
is_y_rescale = False
is_y_sort = False
is_print_info = True
var_plot = 0  # lets us plot against different variables, use 0 for univariate
is_err_bars = True
is_norm_plot = True
is_boundary = True      # boundary stuff ONLY works for univariate - turn off for larger
is_bound_val = False    # plot validation points for boundary
is_bound_train = True   # plot training points for boundary
is_bound_indiv = True   # plot individual boundary estimates
is_bound_ideal = True   # plot ideal boundary
is_title = True         # show title w/ metrics on graph
bound_limit = 6.        # how far to plot boundary

# resampling
bootstrap_method = 'replace_resample'  # whether to bootstrap or jackknife - prop_of_data replace_resample
prop_select = 0.8  # if jackknife (=prop_of_data), how much data to use each time

# other
in_ddof=1 if n_runs > 1 else 0 # this is for results over runs only

# pre calcs
if Alpha == 0.05: ## FOR 95% PIS
    n_std_devs = 1.96
elif Alpha == 0.10: ##FOR 90% PIS
    n_std_devs = 1.645
elif Alpha == 0.01: ## FOR 99% PIS
    n_std_devs = 2.575
else:
    raise Exception('ERROR unusual Alpha')
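# NOTE: 1.96, 1.645 and 2.575 are the two-sided standard-normal quantiles
# z_{1-alpha/2}: an interval of mean +/- 1.96*std covers ~95% of a Gaussian,
# 1.645 covers ~90%, and 2.575 covers ~99%.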

results_runs = []
run=0

# X_train = []
# y_train = []
# X_val = []
# y_val = []

for run in range(0,n_runs):
    # generate data
    Gen = DataGenerator(type_in="boston")   
    X_train, y_train, X_val, y_val = Gen.CreateData(n_samples=n_samples, seed_in=run, train_prop=train_prop, bound_limit=bound_limit, n_std_devs=n_std_devs)

    print('\n--- view data ---')
    
    Gen.ViewData(n_rows=5, hist=False, plot=False)

    X_boundary = []
    y_boundary = []
    y_pred_all = []
    
    X_train_orig, y_train_orig = X_train, y_train
    for b in range(0,n_bootstraps):

        # bootstrap sample
        if is_bootstrap:
            np.random.seed(b)
            if bootstrap_method=='replace_resample':
                # resample w replacement method
                id = np.random.choice(X_train_orig.shape[0], X_train_orig.shape[0], replace=True)
                X_train = X_train_orig[id]
                y_train = y_train_orig[id]

            elif bootstrap_method == 'prop_of_data':
                # select x% of data each time, NO resampling
                perm = np.random.permutation(X_train_orig.shape[0])
                X_train = X_train_orig[perm[:int(perm.shape[0] * prop_select)]]
                y_train = y_train_orig[perm[:int(perm.shape[0] * prop_select)]]

        i=0
        while i < n_ensemble:
            is_failed_run = False

            tf.reset_default_graph()
            sess = tf.Session()
            
            # info
            if is_print_info:
                print('\nrun number', run+1, ' of ', n_runs, ' -- bootstrap number', b+1, ' of ', n_bootstraps, ' -- ensemble number', i+1, ' of ', n_ensemble)

            # load network
            NN = TfNetwork(x_size=X_train.shape[1], y_size=2, h_size=h_size, type_in="pred_intervals", alpha=alpha, loss_type=loss_type, soften=soften, lambda_in=lambda_in, sigma_in=sigma_in, activation=activation, bias_rand=False, out_biases=out_biases)

            # Train
            NN.train(sess, X_train, y_train, X_val, y_val, n_epoch=n_epoch, l_rate=l_rate, decay_rate=decay_rate, resume_train=False, print_params=False, is_early_stop=is_early_stop, is_use_val=is_use_val, optim=optim, is_batch=is_batch, n_batch=n_batch, is_run_test=is_run_test, is_print_info=is_print_info)

            # visualise Training
            if show_train:
                NN.vis_train(save_graphs, is_use_val)

            # make preDictions
            y_loss, y_pred, y_metric, y_U_cap, y_U_prop, \
                y_L_cap, y_L_prop, y_all_cap, y_all_prop \
                = NN.predict(sess, X=X_val, y=y_val, in_sess=True)

            # check whether the run failed or not
            if np.abs(y_loss) > 20.:
            # if False:
                is_failed_run = True
                print('\n\n### one messed up! repeating ensemble ###')
                continue  # without saving!
            else:
                i += 1  # continue to next

            # save prediction
            y_pred_all.append(y_pred)

            # predicting for boundary, need to do this for each model
            if is_boundary:
                X_boundary.append(np.linspace(start=-bound_limit, stop=bound_limit, num=506)[:, np.newaxis])

                t, y_boundary_temp, t, t, t, t, t, t, t = NN.predict(sess, X=X_boundary[i-1], y=np.zeros_like(X_boundary[i-1]), in_sess=True)
                y_boundary.append(y_boundary_temp)
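                # NOTE: X_boundary[i-1] has shape (506, 1), while the network's X
                # placeholder was built with x_size = X_train.shape[1] = 13 for the
                # Boston data - the predict call above is exactly where the
                # ValueError in the traceback is raised (boundary plotting assumes
                # univariate input, per the comment on is_boundary).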

            sess.close()

    # we may have predicted with gauss_like or qd_soft, here we need to get estimates for
    # upper/lower PIs AND gaussian params no matter which method we used (so can compare)
    y_pred_all = np.array(y_pred_all)

    if loss_type == 'qd_soft':
        y_pred_gauss_mid, y_pred_gauss_dev, y_pred_U, \
            y_pred_L = pi_to_gauss(y_pred_all, lube_perc, perc_or_norm, n_std_devs)

    elif loss_type == 'gauss_like':  # work out bounds given mu sigma
        y_pred_gauss_mid_all = y_pred_all[:, :, 0]
        # occasionally may get -ves for std dev so need to do max
        y_pred_gauss_dev_all = np.sqrt(np.maximum(np.log(1. + np.exp(y_pred_all[:, :, 1])), 10e-6))
        y_pred_gauss_mid, y_pred_gauss_dev, y_pred_U, \
            y_pred_L = gauss_to_pi(y_pred_gauss_mid_all, y_pred_gauss_dev_all, n_std_devs)

    elif loss_type == 'mse':  # as for gauss_like but we don't know std dev so guess
        y_pred_gauss_mid_all = y_pred_all[:, :, 0]
        y_pred_gauss_dev_all = np.zeros_like(y_pred_gauss_mid_all) + 0.01
        y_pred_gauss_mid, y_pred_gauss_dev, y_pred_U, \
            y_pred_L = gauss_to_pi(y_pred_gauss_mid_all, y_pred_gauss_dev_all, n_std_devs)

    # work out metrics
    y_U_cap = y_pred_U > y_val.reshape(-1)
    y_L_cap = y_pred_L < y_val.reshape(-1)
    y_all_cap = y_U_cap * y_L_cap
    PICP = np.sum(y_all_cap)/y_L_cap.shape[0]
    MPIW = np.mean(y_pred_U - y_pred_L)
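    # NOTE: PICP = prediction interval coverage probability (fraction of y_val
    # falling inside [y_pred_L, y_pred_U]); MPIW = mean prediction interval width.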
    y_pred_mid = np.mean((y_pred_U, y_pred_L), axis=0)
    MSE = np.mean(np.square(Gen.scale_c * (y_pred_mid - y_val[:, 0])))
    RMSE = np.sqrt(MSE)
    CWC = np_QD_loss(y_val, y_pred_L, y_pred_U, alpha, soften, lambda_in)
    neg_log_like = gauss_neg_log_like(y_val, y_pred_gauss_mid, y_pred_gauss_dev, Gen.scale_c)
    residuals = y_pred_mid - y_val[:, 0]
    shapiro_W, shapiro_p = stats.shapiro(residuals[:])
    results_runs.append((PICP, MPIW, CWC, RMSE, neg_log_like, shapiro_W, shapiro_p))

    # concatenate for graphs
    title = 'PICP=' + str(round(PICP, 3)) \
                + ', MPIW=' + str(round(MPIW, 3)) \
                + ', qd_loss=' + str(round(CWC, 3)) \
                + ', NLL=' + str(round(neg_log_like, 3)) \
                + ', alpha=' + str(alpha) \
                + ', loss=' + NN.loss_type \
                + ', data=' + type_in + ',' \
                + '\nh_size=' + str(NN.h_size) \
                + ', bstraps=' + str(n_bootstraps) \
                + ', ensemb=' + str(n_ensemble) \
                + ', RMSE=' + str(round(RMSE, 3)) \
                + ', soft=' + str(NN.soften) \
                + ', lambda=' + str(NN.lambda_in)

    # visualise
    if show_graphs:
        # error bars
        if is_err_bars:
            plot_err_bars(X_val, y_val, y_pred_U, y_pred_L, is_y_sort, is_y_rescale, Gen.scale_c, save_graphs, title, var_plot, is_title)

        # visualise boundary
        if is_boundary:
            y_bound_all = np.array(y_boundary)
            plot_boundary(y_bound_all, X_boundary, loss_type, Gen.y_ideal_U, Gen.y_ideal_L, Gen.X_ideal, Gen.y_ideal_mean, is_bound_ideal, in_ddof, n_std_devs, is_bound_val, is_bound_train, is_bound_indiv, is_title)

        # normal dist stuff
        if is_norm_plot:
            title = 'shapiro_W=' + str(round(shapiro_W, 3)) + \
                ', data=' + type_in + ', loss=' + NN.loss_type + \
                ', n_val=' + str(y_val.shape[0])
            fig, (ax1, ax2) = plt.subplots(2)
            ax1.set_xlabel('y_pred - y_val')  # histogram
            ax1.hist(residuals, bins=30)
            ax1.set_title(title, fontsize=10)
            stats.probplot(residuals[:], plot=ax2)  # QQ plot
            ax2.set_title('')
            fig.show()

# summarise results, print for paste to excel
print('\n\nn_samples, h_size, n_epoch, l_rate, decay_rate, lambda_in, sigma_in')
print(n_samples, h_size, n_epoch, l_rate, decay_rate, lambda_in, sigma_in)
print('\n\ndata=', type_in, 'loss_type=', loss_type)
results_runs = np.array(results_runs)
metric_names = ['PICP', 'MPIW', 'CWC', 'RMSE', 'NLL', 'shap_W', 'shap_p']
print('runs\tboots\tensemb')
print(n_runs, '\t', n_bootstraps, '\t', n_ensemble)
print('\tavg\tstd_err\tstd_dev')
for i in range(0, len(metric_names)):
    avg = np.mean(results_runs[:, i])
    std_dev = np.std(results_runs[:, i], ddof=in_ddof)
    std_err = std_dev / np.sqrt(n_runs)
    print(metric_names[i], round(avg, 3), round(std_err, 3), round(std_dev, 3))

# timing info
end_time = datetime.datetime.now()
total_time = end_time - start_time
print('seconds taken:', round(total_time.total_seconds(), 1), '\nstart_time:', start_time.strftime('%H:%M:%S'), 'end_time:', end_time.strftime('%H:%M:%S'))
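Reading the listing against the traceback, the shapes line up with the error: the network is built for 13 input features, while the boundary branch feeds a single-feature grid. A minimal sketch of just those shapes (plain NumPy/TF 1.x, written here for illustration; TfNetwork itself is not reproduced):

import numpy as np
import tensorflow as tf  # TF 1.x, as in the traceback

X_train = np.zeros((506, 13))  # Boston housing: 506 rows, 13 features
# TfNetwork is constructed with x_size=X_train.shape[1], so its X placeholder is (?, 13):
X_ph = tf.placeholder(tf.float32, shape=[None, X_train.shape[1]])

# what the is_boundary branch builds and then feeds to NN.predict:
X_bound = np.linspace(start=-6., stop=6., num=506)[:, np.newaxis]
print(X_bound.shape)  # (506, 1) -> cannot be fed to the (?, 13) placeholder

Since the in-code comment on is_boundary already warns that the boundary plots only work for univariate data, setting is_boundary=False for the 13-feature Boston data should at least avoid this failing feed, though that is an inference from the listing rather than a confirmed fix.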

Solution

No effective way to solve this problem has been found yet.

