In this comprehensive tutorial, we’ll guide you through creating a Graphical User Interface (GUI) to capture images from your webcam. Leveraging OpenCV, we’ll swiftly grab frames from your camera, and with PyQt5, we’ll craft the interactive elements of the user interface. While there’s a plethora of tutorials on diverse Python applications, a full-fledged guide on developing a complete desktop application in Python is rare.
The intricacy of building GUIs often lies not in the coding itself but in the user interaction considerations. For example, if your application permits users to select a camera and capture an image, you need to account for scenarios like the user attempting to take a picture before selecting a camera—just one of many potential user interactions.
By the end of this tutorial, you’ll have a solid understanding of how to structure a project into modules for clarity and maintainability. You’ll gain hands-on experience in initiating a PyQt application from the ground up, gradually adding complexity. Ultimately, you will possess a practical example of interfacing with a real-world device via a GUI.
Setting Up OpenCV and PyQt5
The goal here is to construct an interface for webcam interaction. This requires two principal libraries: OpenCV for image acquisition and PyQt5 as the interface framework.
OpenCV is a comprehensive package compatible with various programming languages, capable of performing a wide array of image processing tasks, such as face detection and object tracking. Although our tutorial won’t cover the full spectrum of OpenCV’s capabilities, it’s important to recognize its extensive potential. To install OpenCV, you can simply execute the following command:
pip install opencv-contrib-python
It’s highly recommended to operate within a virtual environment to sidestep any potential conflicts with other library installations. The installation of OpenCV should typically include numpy as well. If any problems arise during the OpenCV installation process, feel free to seek assistance in the forum or consult the official documentation for guidance.
To verify the successful installation of OpenCV, start the Python interpreter and enter the following commands to check the installed version (the exact number will depend on the package you installed):
>>> import cv2
>>> cv2.__version__
'3.4.2'
The subsequent step involves installing PyQt5, which is as straightforward as issuing the command:
pip install PyQt5
This installation generally proceeds without a hitch, but should you encounter difficulties, particularly on certain platforms, an excellent workaround is to install Anaconda, which comes with all necessary packages pre-installed across various platforms.
To test the PyQt5 installation, craft a brief script like the one below:
from PyQt5.QtWidgets import QApplication, QMainWindow
app = QApplication([])
win = QMainWindow()
win.show()
app.exit(app.exec_())
Upon executing the script, an empty window should appear, indicating that the installation is functional.
Next, to display the images captured from the webcam, various libraries are available. Matplotlib is a popular choice for graphing and can also handle 2D image plots. Pillow offers a robust suite for image operations in Python. Another option is pyqtgraph, which, while not as widely recognized in the broader Python community, is frequently used in scientific research settings.
This tutorial uses pyqtgraph for image display: it is fast, designed to work with Qt, and covers all the functionality required here. Installation is straightforward:
pip install pyqtgraph
Introduction to OpenCV for Image Acquisition
Embarking on the development of applications that involve image acquisition requires a clear understanding of the objectives before diving into user interface design. OpenCV streamlines the process of capturing images from a webcam. To start reading from a webcam, execute the following Python code:
import cv2
import numpy as np
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()
print(np.min(frame))
print(np.max(frame))
The initial line sets up communication with the camera. If no camera is connected, cap.read() will not crash the program; it simply returns False for ret and None for frame (which means the print statements below would then fail). After interacting with the camera, it’s crucial to release it. The final lines output the minimum and maximum values captured by the camera, with frame being a numpy array of shape (height, width, 3), one layer per color channel.
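The ret flag makes it easy to guard against a missing camera. A minimal sketch (the read_frame helper is our own name, not an OpenCV function; it works with any object exposing a .read() method):

```python
# A small guard around cap.read(): OpenCV returns a (success, frame) tuple,
# and the success flag is False when no camera delivered data. Checking it
# turns the no-camera case into an explicit None instead of a crash later on.
def read_frame(cap):
    """Return the captured frame, or None when the camera produced no data."""
    ret, frame = cap.read()
    return frame if ret else None
```

With a real camera this would be called as read_frame(cv2.VideoCapture(0)), letting the caller test for None instead of hitting a confusing error inside a numpy operation.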
Progressing further, capturing video involves a continuous loop that acquires and displays a new frame on each iteration. To exit, press ‘q’ on your keyboard. The following example also converts the image to grayscale; you can omit that step to view the original color image.
import cv2
cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
The acquisition process is straightforward: initiate communication with the camera and then proceed to read from it. During this process, various adjustments can be made, either to the image output, such as converting it to grayscale, or directly to the camera’s settings. For instance, to increase the camera’s brightness, the following line is added immediately after initializing VideoCapture:
cap.set(cv2.CAP_PROP_BRIGHTNESS, 1)
Adjusting the camera property in this way makes the change persistent; the brightness setting will remain at the altered level until it is explicitly reset to its default value, typically 0.5, and this persists even after the program is restarted. It’s advisable to refer to the OpenCV documentation on camera properties to explore the full range of adjustable parameters. Be aware that not all properties are supported by every camera model, which might result in errors or no apparent change.
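Because these property changes persist, it can help to read the current value with cap.get before overwriting it, so the original can be restored afterwards. A minimal sketch of that pattern (set_property_keeping_old is our own helper name; only the get/set interface comes from OpenCV):

```python
# Read a camera property before changing it, so the original value can be
# restored when the program finishes. Relies only on the get/set interface
# that cv2.VideoCapture provides, so any compatible object works.
def set_property_keeping_old(cap, prop, value):
    """Set `prop` to `value` and return the previous value for a later restore."""
    old = cap.get(prop)
    cap.set(prop, value)
    return old
```

With a real camera this would be old = set_property_keeping_old(cap, cv2.CAP_PROP_BRIGHTNESS, 1), followed later by cap.set(cv2.CAP_PROP_BRIGHTNESS, old) to undo the change.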
Creating a video entails capturing frames in a continuous loop. While we won’t delve into the specifics here, be mindful that certain conditions, like extended exposure times, can increase the duration of frame acquisition, which could complicate the continuous capture required for video.
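To check whether a camera can keep up with video, one option is to time individual frame acquisitions. A rough sketch, assuming acquisition is exposed as a plain function call (estimate_fps is our own helper, not part of OpenCV):

```python
import time

def estimate_fps(acquire, num_frames=10):
    """Time num_frames calls to `acquire` and return the achieved frame rate."""
    start = time.perf_counter()
    for _ in range(num_frames):
        acquire()
    elapsed = time.perf_counter() - start
    return num_frames / elapsed
```

With the loop above, this would be called as estimate_fps(lambda: cap.read()); a long exposure time shows up directly as a low returned rate.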
Introduction to PyQt and Qt Framework
Qt is a comprehensive, cross-platform library initially written in C++ and made available for various platforms. PyQt represents the Python bindings for Qt, essentially adapting the original C++ code into Python objects for use within Python environments. A notable challenge when working with PyQt arises from the majority of documentation being tailored for the original C++ code, necessitating users to mentally map between languages. While this learning curve requires some investment of time, once accustomed, it becomes quite manageable.
Please be aware that there is an alternative Python binding for Qt known as PySide2. Released officially by the Qt company, PySide2 operates under a different licensing model and, for all intents and purposes, functions similarly to PyQt in practical application. For those concerned about code licensing issues, it is worth exploring both options to determine which aligns best with your project’s needs.
At its core, a user interface is driven by an event loop that continually redraws windows, processes user interactions, and updates visual elements like webcam feeds. When this loop is interrupted, the application concludes, and all open windows are closed. Now, let’s begin by constructing a basic window:
from PyQt5.QtWidgets import QApplication, QMainWindow
app = QApplication([])
win = QMainWindow()
win.show()
app.exit(app.exec_())
The event loop in a PyQt application is initiated by the app.exec_() call. Without this line, the program will run to completion, but no user interface will be displayed, since the event loop never starts. Wrapping the call as app.exit(app.exec_()) passes the loop’s return code along when the application closes; many examples use sys.exit(app.exec_()) for the same purpose. It is critical to instantiate the QApplication object before creating any widgets, as failing to do so results in a telling error:
QWidget: Must construct a QApplication before a QWidget
Aborted (core dumped)
In the PyQt framework, and Qt at large, the fundamental elements that compose a window are known as Widgets. These include not just the window itself but also buttons, dialogs, images, icons, and more. Developers also have the capability to craft custom widgets. The code snippet shown earlier demonstrates the emergence of a simple, unadorned window. To add a touch of interactivity, one might insert a button into this window:
from PyQt5.QtWidgets import QApplication, QMainWindow, QPushButton
app = QApplication([])
win = QMainWindow()
button = QPushButton('Test')
win.setCentralWidget(button)
win.show()
app.exit(app.exec_())
In PyQt, buttons are represented by the QPushButton class. While certain aspects of the code remain consistent, like the creation of the QApplication and the execution of the event loop, the QPushButton requires a label text upon creation. If you’re working with a QMainWindow, you can set the button as its central widget, a necessity for main windows to function properly. The resulting interface is rudimentary, showcasing a main window with a singular button.
Although simple in appearance, it serves as a fundamental building block. The next step is to assign an action to the button press. This is where the concept of Signals and Slots, central to Qt’s event handling, comes into play.
Implementing Signals and Slots in Qt for Dynamic Applications
In the realm of complex application development, especially those with a graphical user interface (GUI), the need to initiate particular actions in response to specific events is common. Imagine an application interfacing with a webcam to capture video. Upon completion, you might want to notify the user via email, save the video to disk, or even upload it to a platform like YouTube. Later, you could decide to initiate the save operation when a user clicks a button, or start an upload in response to an incoming email.
The ideal scenario for such dynamic functionality is a programming model where you can ‘subscribe’ functions to signals that are emitted at crucial moments. For instance, once a video capture concludes, the application can emit a signal which then prompts all subscribed functions or ‘listeners’ to act. This model allows the video acquisition code to remain constant while the subsequent actions—what occurs post-capture—can be modified or extended easily.
Conversely, you write the function to save the video once, and then trigger it in response to different events: the end of video capture, a button press, and so on. The unpredictable nature of user interaction—whether they capture an image, record a video, or attempt to save data prematurely—makes it particularly useful to be able to link actions to specific events.
In Qt, this concept is operationalized through Signals and Slots. Signals are emitted at predetermined moments, and Slots are the corresponding actions that execute in response. Taking the earlier example of a QPushButton, the action of pressing the button generates a signal. The slot could be any function we designate; in our case, we’ll have it output a message to the terminal.
from PyQt5.QtWidgets import QApplication, QMainWindow, QPushButton
def button_pressed():
    print('Button Pressed')
app = QApplication([])
win = QMainWindow()
button = QPushButton('Test')
button.clicked.connect(button_pressed)
win.setCentralWidget(button)
win.show()
app.exit(app.exec_())
Firstly, define the function for the desired action. In this example, the function is button_pressed. The key interaction occurs in the following line, where the ‘clicked’ signal of the button is connected to the button_pressed function. Note the absence of parentheses in this line. Running the program and clicking the button now triggers a message in the terminal.
Building on the previous discussion, another function can be added that also responds to the button press. The resulting code might look like this, with common parts omitted for brevity:
def button_pressed():
    print('Button Pressed')

def new_button_pressed():
    print('Another function')

button.clicked.connect(button_pressed)
button.clicked.connect(new_button_pressed)
Upon rerunning the program, you’ll observe that each time the button is pressed, two messages are displayed on the terminal. Alternatively, you could have employed functions imported from various packages. To complete this example, add a second button and link its clicked signal to the button_pressed function.
Integrating a new widget into a Main Window involves some additional procedures. As mentioned earlier, each main window necessitates one central widget.
You have the flexibility to include typical window elements like a menu, toolbar, etc., but it’s crucial to note that a window typically has only one central widget. To accommodate the addition of two buttons, it’s advisable to create an empty widget that will serve as a container for these buttons. Consequently, this widget will then become the central widget of the window.
from PyQt5.QtWidgets import QApplication, QMainWindow, \
    QPushButton, QVBoxLayout, QWidget
app = QApplication([])
win = QMainWindow()
central_widget = QWidget()
button = QPushButton('Test', central_widget)
button2 = QPushButton('Second Test', central_widget)
win.setCentralWidget(central_widget)
win.show()
app.exit(app.exec_())
When defining the buttons, the second argument indicates the parent of the widget. This provides a quick way to add elements to widgets and establish a clear hierarchical relationship, as we’ll explore later. Upon running the provided code, you’ll notice only the “Second Test” button is visible. Changing the order in which you define button and button2 will reveal that one button overlays the other. This occurs because the “Second Test” button, taking up more space, obscures the “Test” button beneath it.
To set the position of the buttons (or any other widget), utilize the setGeometry method, which takes four arguments. The first two specify the x, y coordinates relative to the parent widget. As widgets can be nested, it’s crucial to keep this in mind. The remaining two arguments define the width and height. Consider the following:
button.setGeometry(0, 50, 120, 40)
Executing this will shift the “Test” button 50 pixels down and set its dimensions to 120 pixels in width and 40 pixels in height.
The example with two buttons, one above the other, may not be aesthetically remarkable, but it effectively demonstrates the basic principles of GUI layout in PyQt. For those wanting to experiment further, adjusting the main window’s dimensions using the setGeometry method can offer insights into how Qt handles widget placement and window sizing. This experimentation can reveal both the power and the complexity of Qt in achieving precise visual layouts.
Now, returning to the functionality of the buttons, the process of linking them to functions remains the same as before, using the ‘clicked’ signal:
button.clicked.connect(button_pressed)
button2.clicked.connect(button_pressed)
Running the updated program shows that clicking either button triggers the same function. However, PyQt allows for flexibility; each button can be connected to distinct functions, or even multiple functions. This approach simplifies code maintenance but can initially be more challenging for beginners to navigate. Since actions can be defined anywhere in the program, tracing the flow of what happens and when might require some time to fully grasp.
Streamlining GUI Design with Qt Layouts
While manually setting the geometry for buttons is feasible, it can be cumbersome, especially when dealing with dynamic content like text that might not fit within predefined dimensions. Additionally, manually tracking and positioning each element, such as placing one button below another, becomes increasingly unwieldy with more complex interfaces involving various widgets and input fields. This is where Qt’s Layouts become invaluable, offering a more efficient and streamlined approach to GUI design.
Layouts in Qt provide a systematic way to arrange UI elements in relation to each other. For example, to stack two buttons vertically, a vertical layout would be the ideal choice. Layouts are applied to widgets, including the central widget in a QMainWindow. Implementing this in the provided example would look something like this:
from PyQt5.QtWidgets import QApplication, QMainWindow, \
    QPushButton, QVBoxLayout, QWidget
app = QApplication([])
win = QMainWindow()
central_widget = QWidget()
button2 = QPushButton('Second Test', central_widget)
button = QPushButton('Test', central_widget)
layout = QVBoxLayout(central_widget)
layout.addWidget(button2)
layout.addWidget(button)
win.setCentralWidget(central_widget)
win.show()
app.exit(app.exec_())
Feel free to experiment with resizing the window and observe how the buttons adapt—a notable contrast to scenarios without layouts. If you prefer placing the buttons side by side, you can employ a QHBoxLayout, and the rest of the code remains unchanged. Connecting signals to functions works exactly the same way, since the button itself is unchanged whether it sits in a layout or not.
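A minimal sketch of the horizontal variant (the QT_QPA_PLATFORM line is our own addition so the snippet can run without a display; drop it in a normal desktop session, where you would also start the event loop with app.exec_()):

```python
import os
os.environ.setdefault('QT_QPA_PLATFORM', 'offscreen')  # headless only

from PyQt5.QtWidgets import (QApplication, QHBoxLayout, QMainWindow,
                             QPushButton, QWidget)

app = QApplication([])
win = QMainWindow()
central_widget = QWidget()
button = QPushButton('Test', central_widget)
button2 = QPushButton('Second Test', central_widget)
layout = QHBoxLayout(central_widget)  # side by side instead of stacked
layout.addWidget(button)
layout.addWidget(button2)
win.setCentralWidget(central_widget)
win.show()
```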
Capturing Images from the GUI
Moving on to integrating image acquisition into the GUI, you’ve taken a foundational step in Qt interface development. Now, let’s put our interface to practical use by controlling the webcam. As you’ve witnessed, connecting buttons to functions is straightforward. We can leverage the previously demonstrated approach to capture a frame from the camera. First, let’s import OpenCV and define the functions we’ll use:
import cv2
import numpy as np
from PyQt5.QtWidgets import QApplication, QMainWindow, \
    QPushButton, QVBoxLayout, QWidget

cap = cv2.VideoCapture(0)

def button_min_pressed():
    ret, frame = cap.read()
    print(np.min(frame))

def button_max_pressed():
    ret, frame = cap.read()
    print(np.max(frame))
We’ve defined two functions—one to output the minimum value of the recorded frame and another for the maximum. Now, let’s complete the rest of the user interface and link the two buttons to these functions. Take note of the updated names assigned to the buttons:
app = QApplication([])
win = QMainWindow()
central_widget = QWidget()
button_min = QPushButton('Get Minimum', central_widget)
button_max = QPushButton('Get Maximum', central_widget)
button_min.clicked.connect(button_min_pressed)
button_max.clicked.connect(button_max_pressed)
layout = QVBoxLayout(central_widget)
layout.addWidget(button_min)
layout.addWidget(button_max)
win.setCentralWidget(central_widget)
win.show()
app.exit(app.exec_())
cap.release()
As functionality expands, like displaying messages on the terminal upon button clicks indicating the maximum or minimum values in an image, the code complexity also increases. The next logical step is to integrate image display within the GUI, but this adds further complexity. To manage this, it’s crucial to restructure the program for efficiency and clarity, particularly in handling image acquisition and processing. This calls for a more organized program layout, and the Model-View-Controller (MVC) design pattern offers a structured solution.
Improving Code Structure with MVC
The focus shifts to refining the code by creating modular classes and files that can be seamlessly integrated into a main file. To avoid confusion, file names will be highlighted in bold. These files should all reside in the same directory with write access.
Crafting effective and sustainable programs requires thoughtful design beyond just coding. While there’s no one-size-fits-all solution, certain best practices like the MVC pattern significantly enhance code clarity, especially for newcomers. The MVC pattern, extensively discussed in the context of web development, has specific implications for desktop applications interacting with real-world devices.
In this context, the ‘Controller’ would be akin to a device driver, such as those provided by OpenCV for camera interaction. In some cases, custom drivers may be developed for specific needs. The ‘Model’ encapsulates the logic of device utilization, which may differ from the device’s intended functionality. For example, creating a ‘movie’ method for a camera that primarily captures single frames, incorporating necessary checks and balances as per the application’s requirements.
The ‘View’ directly corresponds to the user interface, encompassing all elements related to Qt. It’s vital to segregate logic from the view – any operational constraints, such as readiness of the webcam, should be handled by the model, not the view.
While MVC is a widely recognized pattern, its components can vary in meaning, especially when developing an application from scratch, as in this tutorial. Web development frameworks like Django or Flask inherently guide developers towards specific patterns. However, in the realm of desktop and scientific applications, such frameworks are less mature, often requiring a ground-up approach to structure and design.
Developing the Camera Model with OpenCV Integration
With OpenCV handling the controller aspect of the camera, the next step is to construct the camera model. This model serves as an abstraction layer, simplifying interactions with the camera. Begin by creating a file named models.py, and outline the basic structure of the Camera class:
class Camera:
    def __init__(self, cam_num):
        pass

    def get_frame(self):
        pass

    def acquire_movie(self, num_frames):
        pass

    def set_brightness(self, value):
        pass

    def __str__(self):
        return 'Camera'
This model is intentionally simple, designed to provide a foundational framework. For those interested in more complex models, examples like the one developed for a Hamamatsu Orca camera offer a glimpse into advanced implementations. The primary benefit of this approach is its flexibility: if a decision is made to change the camera or its driver, only the model needs updating, allowing the rest of the program to function uninterrupted.
The model is designed with certain functionalities in mind. The __init__ method accepts a camera number, corresponding to OpenCV’s VideoCapture requirement. The get_frame and acquire_movie methods will handle the retrieval of single frames and sequences of frames from the camera, respectively. set_brightness demonstrates how to adjust a camera setting, and the __str__ method aids in identifying the camera, which will be particularly useful in the GUI.
The skeleton of the model is now in place. The next step is to imbue these methods with functionality. Utilizing a class structure allows for the storage of important parameters, like the cap variable from OpenCV, within the class, making them readily accessible to all methods and streamlining interaction with the camera. Remember to add import cv2 at the top of models.py, since the methods below depend on it.
def __init__(self, cam_num):
    self.cap = cv2.VideoCapture(cam_num)
    self.cam_num = cam_num

def __str__(self):
    return 'OpenCV Camera {}'.format(self.cam_num)
The __str__ method has been updated to reflect that it’s an OpenCV camera and to display its number. For a quick test of the model, append the following code block to the models.py file:
if __name__ == '__main__':
    cam = Camera(0)
    print(cam)
Running models.py directly will print a message to the screen. However, it’s noticeable that the camera is not being properly closed in this example. Directly accessing the cam.cap attribute is possible, but a more elegant solution is to avoid direct interaction with the controller, especially since future camera models might require different methods for termination. To address this, introduce a method to close the camera:
def close_camera(self):
    self.cap.release()
Additionally, it’s beneficial to initiate camera communication not at class instantiation but at a chosen moment. This approach allows for reopening the camera after the close_camera method has been called:
def __init__(self, cam_num):
    self.cam_num = cam_num
    self.cap = None

def initialize(self):
    self.cap = cv2.VideoCapture(self.cam_num)
The __init__ method sets self.cap to None, adhering to the best practice of declaring all class attributes in the initializer. This practice aids in quickly identifying available attributes and checks the status of cap before its use. Update the test block at the end of the file accordingly:
if __name__ == '__main__':
    cam = Camera(0)
    cam.initialize()
    print(cam)
    cam.close_camera()
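Since self.cap starts out as None, methods can check it before use. A minimal sketch of such a guard, using a stripped-down version of the Camera class (the RuntimeError and its message are our own addition, not part of the tutorial's model):

```python
class Camera:
    """Stripped-down model, showing only the cap-is-None guard."""
    def __init__(self, cam_num):
        self.cam_num = cam_num
        self.cap = None

    def get_frame(self):
        # Fail early with a clear message instead of an AttributeError on None
        if self.cap is None:
            raise RuntimeError('Camera not initialized; call initialize() first')
        ret, self.last_frame = self.cap.read()
        return self.last_frame
```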
The next step is defining the methods for camera operation, considering whether to return values for use by other modules or store them within the class. A combination of these approaches is also possible, offering flexibility in how the camera data is handled and integrated into the broader application structure.
def get_frame(self):
    ret, self.last_frame = self.cap.read()
    return self.last_frame
If you’ve been following along, the progression of the camera model should be straightforward. Notably, the model now includes storing the most recent frame in the self.last_frame attribute. To demonstrate this functionality, the example code at the end of models.py can be updated:
if __name__ == '__main__':
    cam = Camera(0)
    cam.initialize()
    print(cam)
    frame = cam.get_frame()
    print(frame)
    cam.close_camera()
Executing this will display a lengthy array representing the data captured by your camera. The next step is to implement the method for capturing a sequence of images, essentially creating a movie. To avoid the pitfalls of infinite loops, a parameter specifying the number of frames is introduced:
def acquire_movie(self, num_frames):
    movie = []
    for _ in range(num_frames):
        movie.append(self.get_frame())
    return movie
This method initiates an empty list to hold the frames, then iterates for the specified number of frames, appending each captured frame to the list. This approach also conveniently updates the last_frame attribute with each capture.
Note: In more advanced camera setups, starting a movie capture and reading frames are typically separate processes, ensuring consistent timing between frames.
It’s important to acknowledge that this method isn’t the most efficient—appending to lists can be slow, and large numbers of frames might lead to memory issues. Nonetheless, it serves as a functional starting point.
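If speed or memory does become a concern, one alternative is to preallocate a numpy array once the first frame reveals the shape. This is our own variation, not part of the tutorial's model; it takes the frame-grabbing function as an argument so it can be tried without a camera:

```python
import numpy as np

def acquire_movie_preallocated(get_frame, num_frames):
    """Store num_frames frames from get_frame() in one preallocated array."""
    first = get_frame()
    # Allocate the full buffer up front: one slot per frame, same dtype/shape.
    movie = np.empty((num_frames,) + first.shape, dtype=first.dtype)
    movie[0] = first
    for i in range(1, num_frames):
        movie[i] = get_frame()
    return movie
```

With the Camera model this would be acquire_movie_preallocated(cam.get_frame, 100); the buffer's total size is known before acquisition starts, so out-of-memory failures happen immediately rather than mid-movie.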
For the set_brightness method, the implementation is straightforward:
def set_brightness(self, value):
    self.cap.set(cv2.CAP_PROP_BRIGHTNESS, value)
One might wonder if it’s possible to retrieve the current brightness setting. This can be achieved by replacing cap.set with cap.get. The same approach applies to other camera properties. This leads to the addition of a new method, get_brightness:
def get_brightness(self):
    return self.cap.get(cv2.CAP_PROP_BRIGHTNESS)
To utilize these new methods, enhance the __main__ block as follows:
cam.set_brightness(1)
print(cam.get_brightness())
cam.set_brightness(0.5)
print(cam.get_brightness())
Remember, changing camera settings like brightness will persist across sessions and applications. If set too high or too low, it might be noticeable in other uses, such as video calls.
With the camera model now fully fleshed out, the next phase involves developing the user interface to interact with these functionalities.
Enhancing PyQt Windows through Subclassing
Initially, our exploration with PyQt windows was conducted using script files. While functional, this approach is not ideal for maintaining and reusing code. A more effective strategy is to create classes that inherit from PyQt’s base classes. As an example, let’s refine our two-button window setup in a structured manner. This involves creating a new file, views.py, and implementing the following code:
from PyQt5.QtWidgets import QMainWindow, QWidget, QPushButton, QVBoxLayout, QApplication
class StartWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.central_widget = QWidget()
        self.button_min = QPushButton('Get Minimum', self.central_widget)
        self.button_max = QPushButton('Get Maximum', self.central_widget)
        self.layout = QVBoxLayout(self.central_widget)
        self.layout.addWidget(self.button_min)
        self.layout.addWidget(self.button_max)
        self.setCentralWidget(self.central_widget)
The final version of this views.py file is also available in the repository. This class, StartWindow, inherits from QMainWindow and encapsulates all the functionality we previously scripted. It includes the essential step of calling super().__init__() to inherit properties and methods from QMainWindow. We then define an empty widget, two buttons, and a layout, adding each element as an attribute of the class for easy access throughout.
Using this window class is straightforward. The following code can be appended to the end of views.py:
if __name__ == '__main__':
    app = QApplication([])
    window = StartWindow()
    window.show()
    app.exit(app.exec_())
This reduces the process to just four lines to display a window with two neatly arranged buttons. To add functionality to these buttons, simply introduce methods within the StartWindow class:
def __init__(self):
    [...]
    self.button_max.clicked.connect(self.button_clicked)

def button_clicked(self):
    print('Button Clicked')
In this truncated example, the button click is now linked to a method within the class, streamlining the integration of additional functionalities. Running views.py will display the same window as before but with the enhanced capability of responding to button clicks. This method of organizing PyQt applications not only simplifies the code but also enhances its modularity and reusability.
Incorporating Image Display into the PyQt GUI
The next step in our PyQt journey involves adding the capability to display images in the GUI. A key consideration is determining how to activate the camera within the GUI context. Ideally, the camera model should be integrated within the StartWindow class, allowing the image update method to be structured as follows:
def update_image(self):
    frame = self.camera.get_frame()
    # Plot the frame
For this to work, self.camera needs to be initialized within the class. To achieve this, the camera can be passed as an argument to the __init__ method of StartWindow:
class StartWindow(QMainWindow):
    def __init__(self, camera):
        super().__init__()
        self.camera = camera
This approach effectively links the model (camera) with the view (GUI window), providing a straightforward and debuggable solution. This setup suggests the presence of a third file where the models and views are combined, but let’s first focus on finalizing the view to interact with the camera. The next step involves updating the buttons within StartWindow and connecting one of them to the update_image method to facilitate image display upon button activation. This integration will bring the GUI to life, allowing real-time interaction with the camera.
import numpy as np
from PyQt5.QtWidgets import QMainWindow, QWidget, QPushButton, QVBoxLayout, QApplication
class StartWindow(QMainWindow):
    def __init__(self, camera=None):
        super().__init__()
        self.camera = camera

        self.central_widget = QWidget()
        self.button_frame = QPushButton('Acquire Frame', self.central_widget)
        self.button_movie = QPushButton('Start Movie', self.central_widget)
        self.layout = QVBoxLayout(self.central_widget)
        self.layout.addWidget(self.button_frame)
        self.layout.addWidget(self.button_movie)
        self.setCentralWidget(self.central_widget)

        self.button_frame.clicked.connect(self.update_image)

    def update_image(self):
        frame = self.camera.get_frame()
        print('Maximum in frame: {}, Minimum in frame: {}'.format(np.max(frame), np.min(frame)))
The updated structure of the GUI retains the same format, with minor modifications to button names and labels. To bring together the camera model and the GUI view, a new file named start.py is created, containing the following code:
from PyQt5.QtWidgets import QApplication
from models import Camera
from views import StartWindow
camera = Camera(0)
camera.initialize()
app = QApplication([])
start_window = StartWindow(camera)
start_window.show()
app.exit(app.exec_())
This code imports the camera model, initializes it, and then passes it to StartWindow. The process is similar to the example at the end of the views.py file. When the “Acquire Frame” button is pressed, the terminal will display the camera’s intensity values.
The final step involves displaying the camera’s captured image in the GUI, and this is where PyQtGraph comes into play. A new widget, capable of holding an image, is added to StartWindow. The code focusing on these new elements is as follows:
from pyqtgraph import ImageView


class StartWindow(QMainWindow):
    def __init__(self, camera=None):
        [...]
        self.image_view = ImageView()
        self.layout.addWidget(self.image_view)
Running start.py now will display a black area beneath the buttons, designated for the image. The method update_image is then updated to display the captured frame:
def update_image(self):
    frame = self.camera.get_frame()
    self.image_view.setImage(frame.T)
When this program is run, it will display the image captured by the camera within the GUI.
Observe that we pass frame.T instead of frame; this choice relates to how OpenCV organizes pixels and how PyQtGraph assumes they are arranged. The .T transposes the matrix, swapping rows for columns. Experimenting with the program reveals that you can zoom in and out with the mouse scroll, adjust levels, and change the color profile. Since PyQtGraph is designed primarily for scientific data rather than photography, not every option will be practical for a webcam, but you can discover some interesting features.
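The effect of the transpose is easy to see on the array shapes alone. OpenCV returns frames as (rows, columns), i.e. (height, width), while ImageView treats the first axis as horizontal, so the axes must be swapped:

```python
# Why frame.T: OpenCV frames are (rows, cols) = (height, width),
# but pyqtgraph's ImageView treats the first axis as horizontal.
import numpy as np

frame = np.zeros((480, 640))   # a typical VGA frame: 480 rows, 640 columns
print(frame.shape)             # (480, 640)
print(frame.T.shape)           # (640, 480) -- axes swapped for display
```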
Integrating a Brightness Control Slider in the PyQt GUI
To enhance the GUI’s functionality, let’s introduce a slider for adjusting the brightness of the image. This feature is implemented within the __init__ method of the StartWindow class. The following code snippet highlights the key additions for this functionality:
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QSlider


class StartWindow(QMainWindow):
    def __init__(self, camera=None):
        super().__init__()
        self.slider = QSlider(Qt.Horizontal)
        self.slider.setRange(0, 10)
        [...]
        self.layout.addWidget(self.slider)
This code creates a horizontal slider with a range set from 0 to 10. These integer values will later be converted to floating-point numbers between 0 and 1, as brightness levels typically operate within this range. Like buttons, sliders in PyQt emit signals when their value changes. The valueChanged signal of the slider is connected to a method that updates the brightness:
def __init__(self, camera):
    [...]
    self.slider.valueChanged.connect(self.update_brightness)

def update_brightness(self, value):
    value /= 10
    self.camera.set_brightness(value)
The update_brightness method receives the current slider value as an argument. Since the brightness range for the camera is between 0 and 1, the slider value is divided by 10 before setting the camera’s brightness. Note that changes in brightness will only be noticeable when a new image is acquired. Optionally, the slider’s value change could also be linked to trigger a new image acquisition, providing immediate feedback on the brightness adjustment.
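The slider-to-brightness conversion can be isolated as a small pure function, which makes it easy to test and to adapt if the slider range ever changes. This helper (the name slider_to_brightness is illustrative, not part of the tutorial's code) also clamps out-of-range values defensively:

```python
def slider_to_brightness(value, lo=0, hi=10):
    """Map an integer slider position in [lo, hi] to a float in [0, 1].

    Clamping guards against out-of-range inputs if the slider range
    is changed later. Hypothetical helper, not from the tutorial.
    """
    value = max(lo, min(hi, value))
    return (value - lo) / (hi - lo)


print(slider_to_brightness(5))    # 0.5
print(slider_to_brightness(10))   # 1.0
print(slider_to_brightness(-3))   # 0.0 (clamped)
```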
Implementing Movie Acquisition with Qt Threads in PyQt GUI
To enable movie acquisition in our user interface, we’ll first establish a connection between the button and the camera model’s method for movie capture. To start, we’ll use a predetermined number of frames for testing purposes:
def __init__(self, camera):
    [...]
    self.button_movie.clicked.connect(self.start_movie)

def start_movie(self):
    self.camera.acquire_movie(200)
However, executing this code reveals a significant issue: the user interface becomes unresponsive during the movie capture. This is due to the acquire_movie method’s lengthy execution time, which blocks the main application loop. To address this, the movie acquisition process needs to be offloaded to a separate thread, ensuring that it doesn’t hinder the responsiveness of the main thread.
To implement this solution, we’ll introduce a new class called MovieThread in views.py:
from PyQt5.QtCore import QThread


class MovieThread(QThread):
    def __init__(self, camera):
        super().__init__()
        self.camera = camera

    def run(self):
        self.camera.acquire_movie(200)
Then, modify the start_movie method to utilize this new thread:
def start_movie(self):
    self.movie_thread = MovieThread(self.camera)
    self.movie_thread.start()
With this code alone, we initiate a new thread where the camera captures frames, but the frames are not yet being displayed. To achieve this, we will establish a timer responsible for periodically updating the displayed image.
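One possible wiring for such a timer is sketched below as methods of StartWindow. This is an assumption rather than the tutorial's definitive code: the names update_timer and update_movie and the 30 ms interval are illustrative, and it presumes the camera model stores its most recent frame in a last_frame attribute that the acquisition loop keeps refreshing:

```python
# Sketch: a QTimer periodically pushes the latest frame to the ImageView
# while MovieThread acquires frames in the background. Assumes the model
# exposes a last_frame attribute updated by the acquisition loop.
from PyQt5.QtCore import QTimer


def start_movie(self):
    self.movie_thread = MovieThread(self.camera)
    self.movie_thread.start()
    self.update_timer = QTimer()
    self.update_timer.timeout.connect(self.update_movie)
    self.update_timer.start(30)   # refresh roughly every 30 ms


def update_movie(self):
    # Display the most recent frame stored by the acquisition loop
    self.image_view.setImage(self.camera.last_frame.T)
```

Because the GUI only ever reads last_frame, the acquisition thread and the timer stay decoupled: the thread can acquire as fast as the camera allows while the display refreshes at its own pace.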
Advanced Features for Your PyQt GUI Application
Having established a foundational understanding of building a user interface in PyQt, there are several additional enhancements and functionalities you might consider implementing to further enrich your application:
- User-Defined Frame Count: Currently, the number of frames for movie acquisition is hard-coded. Enhance this by incorporating a QLineEdit widget, enabling users to input their desired number of frames. This addition will make your application more flexible and user-friendly;
- Continuous Movie Acquisition: Extend the functionality to allow for continuous movie recording. Modify the loop in the model to run indefinitely when the number of frames is set to 0 or None. Implementing this feature requires careful consideration of how to effectively stop the movie recording when needed, ensuring a responsive and user-controlled experience;
- Data Storage and Management: The current model accumulates all frame data in an attribute. Consider adding functionality to save the captured movie or individual images to a file. Introducing a new button for this purpose, along with implementing data storage solutions like HDF5 files, could greatly enhance the usability of your application, especially for users needing to retain or analyze captured data.
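As a starting point for the data-storage idea above, the frames accumulated by the model can be stacked and written to an HDF5 file. This is a hedged sketch using h5py (an assumption; the tutorial does not prescribe a storage backend, and the save_movie helper name is illustrative):

```python
# Sketch of saving an acquired movie to HDF5 with h5py (an assumption;
# any storage backend would work). Frames are assumed to be a list of
# equally-shaped numpy arrays accumulated by the camera model.
import numpy as np
import h5py


def save_movie(frames, filename='movie.h5'):
    """Stack the acquired frames and write them to one HDF5 dataset."""
    data = np.stack(frames)   # shape: (num_frames, height, width[, channels])
    with h5py.File(filename, 'w') as f:
        f.create_dataset('movie', data=data, compression='gzip')
```

Connecting this helper to a new "Save" button would follow the same clicked.connect pattern used for the other buttons in this tutorial.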
These additional features not only challenge your programming skills but also significantly augment the practicality and versatility of your PyQt application. Implementing these enhancements will provide a more comprehensive user experience and demonstrate a deeper understanding of application development using PyQt.
Conclusion
Throughout this article, we have taken an insightful journey into creating user interfaces that interact with real-world devices, such as a camera. Significant ground has been covered, and it's important to remember that this is just an introduction to a vast and intricate field. The techniques and concepts discussed here provide a solid foundation for building more complex and dynamic interfaces.
As you move forward, remember that the possibilities in GUI development are virtually limitless. The skills and knowledge acquired here are just the beginning, paving the way for exploration, innovation, and the expansion of capabilities in developing sophisticated projects. Whether enhancing functionality, refining user experience, or integrating more complex systems, there’s always more to learn and create in the exciting realm of GUI development.