TensorFlow 2.2.0: Keras subclassed model can train and predict, but saving throws "Dimension size must be evenly divisible by X"
The model is a CNN variant (a WaveNet) that uses causal dilated convolution layers.
I can train and predict with zero errors, but when I save the model with model.save() it throws an exception.
So for now I use save_weights and load_weights to save and load the model instead.
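A minimal sketch of that weights-only workaround (not from the original post; it assumes the WaveNet class and imports defined further down, and the checkpoint path is illustrative):

```python
# save_weights/load_weights bypass full-model serialization, so the
# graph tracing that fails inside model.save() is never triggered.
model.save_weights("wavenet_ckpt")  # illustrative path

# To restore: rebuild the architecture, run one forward pass to create
# the variables (training=True, since the inference branch is a stub),
# then load the weights into the fresh instance.
restored = WaveNet()
_ = restored(tf.zeros((1, 743, 27)), training=True)  # input shape from the question
restored.load_weights("wavenet_ckpt")
```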
I don't understand why this error occurs:
```python
model.save("path")
```
Output:
```
ValueError: Dimension size must be evenly divisible by 2 but is 745 for '{{node conv1d_5/SpaceToBatchND}} = SpaceToBatchND[T=DT_FLOAT, Tblock_shape=DT_INT32, Tpaddings=DT_INT32](conv1d_5/Pad, conv1d_5/SpaceToBatchND/block_shape, conv1d_5/SpaceToBatchND/paddings)' with input shapes: [?,745,32], [1], [1,2] and with computed input tensors: input[1] = <2>, input[2] = <[0 0]>.
```
The input shape is (None, 743, 27) and the output shape is (None, 24, 1). (The 745 in the error is presumably the 743-step input plus the 2 steps of causal padding added before the dilated convolution; the convolution is lowered to SpaceToBatchND, which requires that padded length to be divisible by the dilation rate, 2 here.)
```python
import tensorflow as tf
from tensorflow.keras.layers import Conv1D, Activation, Dropout, Lambda, Add, Concatenate
from tensorflow.keras.optimizers import Adam


def slice(x, seq_length):
    return x[:, -seq_length:, :]


class ResidualBlock(tf.keras.layers.Layer):
    def __init__(self, n_filters, filter_width, dilation_rate):
        super(ResidualBlock, self).__init__()
        self.n_filters = n_filters
        self.filter_width = filter_width
        self.dilation_rate = dilation_rate
        # preprocessing - equivalent to time-distributed dense
        self.x = Conv1D(32, 1, padding='same', activation='relu')
        # filter convolution
        self.x_f = Conv1D(filters=n_filters,
                          kernel_size=filter_width,
                          padding='causal',
                          dilation_rate=dilation_rate,
                          activation='tanh')
        # gating convolution (kernel/padding/dilation args were dropped in the
        # scrape; assumed here to mirror the filter convolution)
        self.x_g = Conv1D(filters=n_filters,
                          kernel_size=filter_width,
                          padding='causal',
                          dilation_rate=dilation_rate,
                          activation='sigmoid')
        # postprocessing - equivalent to time-distributed dense
        # (kernel_size was missing in the post; a 1x1 conv is assumed)
        self.z_p = Conv1D(32, 1, padding='same', activation='relu')

    def call(self, inputs):
        x = self.x(inputs)
        f = self.x_f(x)
        g = self.x_g(x)
        z = tf.multiply(f, g)
        z = self.z_p(z)
        return tf.add(x, z), z

    def get_config(self):
        config = super(ResidualBlock, self).get_config()
        config.update({"n_filters": self.n_filters,
                       "filter_width": self.filter_width,
                       "dilation_rate": self.dilation_rate})
        return config


class WaveNet(tf.keras.Model):
    def __init__(self, n_filters=32, filter_width=2, dilation_rates=None,
                 drop_out=0.2, pred_length=24):
        super().__init__(name='WaveNet')
        # layer parameters
        self.n_filters = n_filters
        self.filter_width = filter_width
        self.drop_out = drop_out
        self.pred_length = pred_length
        if dilation_rates is None:
            self.dilation_rates = [2 ** i for i in range(8)]
        else:
            self.dilation_rates = dilation_rates
        # layers
        self.residual_stacks = []
        for dilation_rate in self.dilation_rates:
            self.residual_stacks.append(
                ResidualBlock(self.n_filters, self.filter_width, dilation_rate))
        # self.add = Add()
        self.cut = Lambda(slice, arguments={'seq_length': pred_length})
        # kernel_size 1 assumed below (missing in the post)
        self.conv_1 = Conv1D(128, 1, padding='same')
        self.relu = Activation('relu')
        self.drop = Dropout(drop_out)
        self.skip = Lambda(lambda x: x[:, -2 * pred_length + 1:-pred_length + 1, :1])
        self.conv_2 = Conv1D(1, 1, padding='same')

    def _unroll(self, inputs, **kwargs):
        outputs = inputs
        skips = []
        for residual_block in self.residual_stacks:
            outputs, z = residual_block(outputs)
            skips.append(z)
        outputs = self.relu(Add()(skips))
        outputs = self.cut(outputs)
        outputs = self.conv_1(outputs)
        outputs = self.relu(outputs)
        outputs = self.drop(outputs)
        outputs = Concatenate()([outputs, self.skip(inputs)])
        outputs = self.conv_2(outputs)
        outputs = self.cut(outputs)
        return outputs

    def _get_output(self, input_tensor):
        pass

    def call(self, inputs, training=False, **kwargs):
        if training:
            return self._unroll(inputs)
        else:
            return self._get_output(inputs)
```
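As a quick sanity check (not in the original post), the shapes quoted above can be verified with a dummy forward pass; training=True is required because the inference branch (_get_output) is still a stub:

```python
import numpy as np

model = WaveNet()
dummy = np.zeros((1, 743, 27), dtype=np.float32)  # input shape from the question
out = model(dummy, training=True)  # routes through _unroll()
print(out.shape)  # expected: (1, 24, 1)
```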
Training steps:
```python
model = WaveNet()
model.compile(Adam(), loss=loss)

# ok
history = model.fit(train_x, train_y,
                    batch_size=batch_size,
                    epochs=epochs,
                    callbacks=[cp_callback] if save else None)

# ok
result = model.predict(test_x)

# error
model.save("path")
```
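cp_callback, save, loss, batch_size and epochs are not defined in the post. Given that weight checkpoints are the working path, a plausible (purely hypothetical) reconstruction of the callback is:

```python
save = True  # hypothetical flag: whether to checkpoint during training
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath="wavenet_ckpt",   # illustrative path
    save_weights_only=True,    # stays on the save_weights path that works
    verbose=1)
```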
No working fix for the model.save() error has been found yet.