Topic 1 Discussion

Through my readings of the provided materials, I grouped them into two categories: first, the adaptations to digital pedagogy; second, the controversy around connecting students to the internet.

I have never taught a big classroom of hundreds of students, and rightfully so. So when I was reading “Learning is not a mechanism” by Jesse Stommel, I was intrigued by the difficulty of constructing the perfect teaching/learning environment. Although Emily Dickinson described school as a prison, I realized it’s more like a factory. When teachers face numerous students, it’s impossible to choose the best learning method for each individual; students become mere data in a spreadsheet, mass-produced by a generalized learning process. Even in a hybrid pedagogy setting, computer-marked tests generate transcripts that reflect on a student’s entire learning career. I would also like to point out that online learning is not limited to lecture videos; YouTube tutorials and open resources such as Khan Academy expand that category, and in those cases a personalized learning experience is equally impossible. I fully agree with Jesse Stommel that “If there is a better sort of mechanism that we need for the work of digital pedagogy, it is a machine, an algorithm, a platform tuned not for delivering and assessing content, but for helping all of us listen better to students.”[9]. While teachers are still familiarizing themselves with new tools and technologies, I believe that, through innovative research, machines can provide a more personalized study environment.
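To make the idea of a machine “tuned for listening to students” a little more concrete, here is a minimal sketch in Python. The `MasteryTracker` class, the topic names, and the update rule are all my own hypothetical illustration, not anything from Stommel’s essay: the machine keeps a running mastery estimate per topic based on what the student actually does, and always proposes practice where the student is weakest.

```python
class MasteryTracker:
    """Toy per-student model: track a running mastery score per topic
    and always suggest practice on the weakest topic."""

    def __init__(self, topics):
        # Start every topic at a neutral 0.5 mastery estimate.
        self.mastery = {t: 0.5 for t in topics}

    def record(self, topic, correct, rate=0.2):
        # Exponential moving average toward 1.0 (correct) or 0.0 (incorrect).
        target = 1.0 if correct else 0.0
        self.mastery[topic] += rate * (target - self.mastery[topic])

    def next_topic(self):
        # "Listen" to the student's history: practice the weakest area next.
        return min(self.mastery, key=self.mastery.get)


tracker = MasteryTracker(["algebra", "geometry", "statistics"])
tracker.record("algebra", correct=True)
tracker.record("geometry", correct=False)
print(tracker.next_topic())  # -> "geometry"
```

Even a toy loop like this responds to each individual rather than to the class average, which is exactly what a teacher facing hundreds of students cannot do by hand.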

However, under the current framework of machine learning, models need a generous supply of training data. Regan and Jesse explain six issues regarding privacy and big data; to summarize, users are uncertain about what happens behind the scenes, and the legislation around this topic is immature. I have personally participated in a few machine learning projects and handled large datasets in an industrial setting. I agree that security is essential for most data, which is why most companies store it off-grid, accessible only from within a physical perimeter. Moreover, on a macro level, an individual’s information is nearly irrelevant to the system as a whole. Ibn Rushd once said, “Ignorance leads to fear.” Users are unsure what will happen with their information and fear the worst. Therefore, I disagree with the fear of an individual being ‘targeted’, specially treated, or surveilled by a collection of activation functions, multiplications, and summations. I believe privacy strongly depends on the ethics of the processing company; if users’ data is used only for training machines, then it’s impossible for the data to cause any harm beyond achieving the machine’s intentions, such as predicting a student’s weak points in learning. Ultimately, companies should go through a strict vetting process before being granted the privilege of collecting and using big data. Sadly, politicians and judges are often just as unfamiliar with the hands-on model development and training process, which invites misinformation and controversy on this topic.
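As a minimal sketch of the point that a trained model retains only aggregate parameters, here is a toy logistic regression in Python. The anonymized rows, the two features, and the “weak point” label are hypothetical examples of my own, not from Regan and Jesse: after training, only the weights and bias remain, and the individual rows can be discarded.

```python
import math

# Hypothetical anonymized training rows:
# ((hours_practiced, quiz_score), has_weak_point)
data = [((1.0, 0.40), 1), ((5.0, 0.90), 0),
        ((2.0, 0.55), 1), ((6.0, 0.85), 0)]

w = [0.0, 0.0]  # one weight per feature
b = 0.0
lr = 0.1


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


# Plain stochastic gradient descent on the logistic log-loss.
for _ in range(1000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y  # gradient of the log-loss w.r.t. the pre-activation
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# After training, only these aggregate parameters remain; the raw rows
# can be deleted, which is the intuition behind the paragraph above.
print("weights:", w, "bias:", b)
print("risk for a new student:", sigmoid(w[0] * 3.0 + w[1] * 0.6 + b))
```

The prediction at the end really is just multiplications, summations, and an activation function applied to new inputs; whether that ever harms anyone comes back to how ethically the processing company handles the data feeding it.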
