Dense layer. Network in Python without TF


I have a task to implement a layer without tensorflow. I have the code, but it doesn't work and I don't understand why.

import numpy as np
from typing import Optional, Tuple

class Dense:

    def __init__(self, n_in: int, n_out: int, use_bias: bool = True):
        self.n_in = n_in
        self.n_out = n_out
        self.use_bias = use_bias
        self.w = np.random.normal(0.0, 1, (self.n_in, self.n_out))
        self.bies = np.random.random(size=(self.n_out))

    @property
    def weights(self) -> Tuple[np.ndarray, Optional[np.ndarray]]:
        """Returns weights used by the layer."""
        return self.w

    @property
    def input(self) -> np.ndarray:
        """Returns the last input received by the layer"""
        return self.n_in
        
    def __call__(self, x: np.ndarray) -> np.ndarray:
        """Performs the layer forward pass.

        Arguments:
            x: input array of shape (`batch_size`, `n_in`)

        Returns:
            An array of shape (`batch_size`, `n_out`)"""
        if sefl.use_bias:
            res_i_w = sefl.x @ sefl.w + self.bies
        else:
            res_i_w = sefl.x @ sefl.w
        return np.reshape(res_i_w, (self.x.size, n_out))
        

    def grad(self, gradOutput: np.ndarray) -> Tuple[np.ndarray, Tuple[np.ndarray, Optional[np.ndarray]]]:
        """Computes layer gradients

        Arguments:
            gradOutput: Gradient of loss function with respect to the layer output, an array of shape (`batch_size`, `n_out`).

        Returns:
            A tuple object:
                Gradient of loss function with respect to the layer input, an array of shape (`batch_size`, `n_in`).
                Gradient of loss function with respect to the layer's weights:
                    An array of shape (`n_in`, `n_out`).
                    Optional array of shape (`n_out`,)."""
        return (gradOutput @ self.w.T, (res_i_w.T @ gradOutput, gradOutput.sum(axis=0)))

Task text:

You need to implement a class in Python that describes a dense layer of a neural network.

Init:

The weights are initialized using uniformly distributed values in range [-1, 1].
Bias vector is not initialized if `use_bias` is false.
Weights matrix has the shape (`n_in`, `n_out`), bias vector has the shape (`n_out`,).
        Arguments:
            n_in: Positive integer, dimensionality of input space.
            n_out: Positive integer, dimensionality of output space.
            use_bias: Whether the layer uses a bias vector.
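
Note that the task text asks for weights uniformly distributed in [-1, 1], while the question's code uses np.random.normal. A minimal sketch of an __init__ that follows the task text (my reading of the task, not code from the question):

def __init__(self, n_in: int, n_out: int, use_bias: bool = True):
    self.n_in = n_in
    self.n_out = n_out
    self.use_bias = use_bias
    # uniform in [-1, 1], as the task text requires
    self.w = np.random.uniform(-1.0, 1.0, size=(n_in, n_out))
    # how to initialize the bias is not specified in the task;
    # random values as in the question's code, and none if use_bias is false
    self.bies = np.random.random(size=(n_out,)) if use_bias else None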

Solution

I don't know whether it computes the values correctly, but at least I found the errors and fixed them.

I only checked `__call__`.

1: You have a typo in the word sefl - it should be self

2: You have to use x instead of self.x

3: You have to use self.n_out instead of n_out

4: A batch means an array with many examples, and batch_size should be the number of those examples, x.shape[0]. You used x.size, which is the number of all values in all examples (see the short check after this list).

5: It gives a result with the expected shape, so `reshape` is not needed
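
A quick check of point 4, using an array shape picked just for illustration:

import numpy as np

x = np.zeros((2, 5))    # a batch of 2 examples with 5 features each
print(x.shape[0])       # 2  -> batch_size, the number of examples
print(x.size)           # 10 -> number of all values in all examples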

def __call__(self, x: np.ndarray) -> np.ndarray:
    """Performs the layer forward pass.

    Arguments:
        x: Input array of shape (`batch_size`, `n_in`)

    Returns:
        An array of shape (`batch_size`, `n_out`)"""

    #batch_size = x.shape[0]  # because (batch_size, n_in) = x.shape

    if self.use_bias:
        res_i_w = x @ self.w + self.bies
    else:
        res_i_w = x @ self.w

    #return np.reshape(res_i_w, (batch_size, self.n_out))
    return res_i_w

If you set self.bies = np.zeros((1, self.n_out)) when use_bias is False, then you can reduce it to one line:

    return x @ self.w + self.bies
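
A minimal sketch of that shortened variant, with the zero bias created in __init__ (combining the two pieces this way is my illustration, not code from the answer):

def __init__(self, n_in: int, n_out: int, use_bias: bool = True):
    self.n_in = n_in
    self.n_out = n_out
    self.use_bias = use_bias
    self.w = np.random.normal(0.0, 1, (n_in, n_out))
    # a zero bias is neutral, so the same one-liner works with and without bias
    self.bies = np.random.random(size=(n_out,)) if use_bias else np.zeros((1, n_out))

def __call__(self, x: np.ndarray) -> np.ndarray:
    return x @ self.w + self.bies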

Now I can test it with some data to see that it works - but I still can't say whether it computes correctly:

n_in = 5
n_out = 3

example1 = list(range(0, n_in, 1))
example2 = list(range(n_in, 0, -1))

batch_input = np.array([example1, example2])
print(batch_input)

print('number of examples:', batch_input.shape[0])
print('n_in:', batch_input.shape[1])

layer = Dense(n_in, n_out, False)  # use_bias=False

batch_output = layer(batch_input)
print(batch_output)

Full code

I use np.random.seed(0) so that the tests always get the same random numbers. That way I can run it several times with the same values. Normally you have to remove it.

import numpy as np
from typing import Optional, Tuple

np.random.seed(0)  # only for reproducible tests - normally remove this

class Dense:

    def __init__(self, n_in: int, n_out: int, use_bias: bool = True):
        self.n_in = n_in
        self.n_out = n_out
        self.use_bias = use_bias
        self.w = np.random.normal(0.0, 1, (self.n_in, self.n_out))
        self.bies = np.random.random(size=(self.n_out))

    @property
    def weights(self) -> Tuple[np.ndarray, Optional[np.ndarray]]:
        """Returns weights used by the layer."""
        return self.w

    @property
    def input(self) -> np.ndarray:
        """Returns the last input received by the layer"""
        return self.n_in
        
    def __call__(self, x: np.ndarray) -> np.ndarray:
        """Performs the layer forward pass.

        Arguments:
            x: Input array of shape (`batch_size`, `n_in`)

        Returns:
            An array of shape (`batch_size`, `n_out`)"""

        #batch_size = x.shape[0]  # because (batch_size, n_in) = x.shape

        if self.use_bias:
            res_i_w = x @ self.w + self.bies
        else:
            res_i_w = x @ self.w

        #return np.reshape(res_i_w, (batch_size, self.n_out))
        return res_i_w
        

    def grad(self, gradOutput: np.ndarray) -> Tuple[np.ndarray, Tuple[np.ndarray, Optional[np.ndarray]]]:
        """Computes layer gradients

        Arguments:
            gradOutput: Gradient of loss function with respect to the layer output, an array of shape (`batch_size`, `n_out`).

        Returns:
            A tuple object:
                Gradient of loss function with respect to the layer input, an array of shape (`batch_size`, `n_in`).
                Gradient of loss function with respect to the layer's weights:
                    An array of shape (`n_in`, `n_out`).
                    Optional array of shape (`n_out`,)."""
        # unchanged from the question: res_i_w is not defined here,
        # so calling grad() raises a NameError (see the sketch after the test)
        return (gradOutput @ self.w.T, (res_i_w.T @ gradOutput, gradOutput.sum(axis=0)))

# --- test ---

n_in  = 5
n_out = 3

example1 = list(range(0, n_in, 1))
example2 = list(range(n_in, 0, -1))

batch_input = np.array([example1, example2])

print('--- batch_input ---')
print(batch_input)
print('number of examples :', batch_input.shape[0])
print('example size (n_in):', batch_input.shape[1])

layer = Dense(n_in, n_out, False)  # use_bias=False

batch_output = layer(batch_input)

print('--- batch_output ---')
print(batch_output)
print('number of examples  :', batch_output.shape[0])
print('example size (n_out):', batch_output.shape[1])
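
The answer only fixed __call__, so grad above is still the question's version: it uses res_i_w, which is never stored, and calling it raises a NameError. A minimal sketch of a working backward pass for y = x @ w + bias, assuming the layer caches its last input; the subclass name DenseWithGrad and the cached attribute self.x are my additions, not part of the original answer:

class DenseWithGrad(Dense):

    def __call__(self, x: np.ndarray) -> np.ndarray:
        self.x = x  # cache the input for the backward pass
        return super().__call__(x)

    def grad(self, gradOutput: np.ndarray) -> Tuple[np.ndarray, Tuple[np.ndarray, Optional[np.ndarray]]]:
        grad_input = gradOutput @ self.w.T  # (batch_size, n_in)
        grad_w = self.x.T @ gradOutput      # (n_in, n_out)
        grad_b = gradOutput.sum(axis=0) if self.use_bias else None  # (n_out,)
        return (grad_input, (grad_w, grad_b))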

By the way: there is a project ML-From-Scratch, where all the code is written in Python (using numpy).

Its Dense layer also runs the forward pass in a single line in forward_pass(): return X.dot(self.W) + self.w0
